Marouso Metocharaki
Independent Greek researcher advancing an open-source quantum-ethical framework with verifiable remorse simulation to reduce misalignment risks in advanced/humanoid AI
Ella Wei
Achieving major reductions in code complexity and compute overhead while improving transparency and reducing deceptive model behavior
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
Alex Leader
Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs
Lawrence Wagner
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Preeti Ravindra
AI Safety Camp 2026 project: Bidirectional failure modes between security and safety
Xyra Sinclair
Unlocking the paradigm of agents + SQL + compositional vector search
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.
Anthony Ware
Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.
Will Shin
A global IP project reimagining ecology and future technology and institutions through character-driven narratives.
Jade Master
Developing correct-by-construction world models for verification of frontier AI
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Muhammad Ahmad
A pilot to build policy and technical capacity for governing high-risk AI systems in Africa
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Rufo Guerreschi
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely and proper US-China-led global AI treaty