Ryan Celimon
Translating AI’s biggest threats into videos anyone can understand: AGI, misalignment, and job loss explained.
Lawrence Wagner
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
Will Shin
A global IP project reimagining ecology, future technology, and institutions through character-driven narratives.
Preeti Ravindra
AI Safety Camp 2026 project: bidirectional failure modes between security and safety
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Muhammad Ahmad
A pilot to build policy and technical capacity for governing high-risk AI systems in Africa
Gergő Gáspár
Help us solve the talent and funding bottlenecks for EA and AI safety.
Xyra Sinclair
Building foundational subjective judgement infrastructure
Jade Master
Developing correct-by-construction world models for verification of frontier AI
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Rufo Guerreschi
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely, and proper US–China-led global AI treaty
David Carel
Accelerating the adoption of air filters in every classroom
Leo Hyams
A 3-month fellowship in Cape Town, connecting a global cohort of talent to top mentors at MIT, Oxford, CMU, and Google DeepMind
Thane Ruthenis
Research agenda aimed at developing methods for constructing powerful, easily interpretable world-models.
Aditya Arpitha Prasad
Practicing embodied protocols that work with live interfaces
Petr Salaba
Working title: Seductive Machines and Human Agency
Dr. Jacob Livingston Slosser
Help get the Sapien Institute off the ground
Pedro Bentancour Garin
Building the first external oversight and containment framework, plus high-rigor attack/defense benchmarks, to reduce catastrophic AI risk.