Agwu Naomi Nneoma
Building early-career capacity to reduce AI-driven societal and catastrophic risks through focused graduate study.
Melina Moreira Campos Lima
Assessing the Climate Potential of Catering Systems in Public Schools and Hospitals
Sara Holt
Short Documentary and Music Video
Avinash A
Formalizing the "Safety Ceiling": An Agda-Verified Impossibility Theorem for AI Alignment
Feranmi Williams
A field-led policy inquiry using Nigeria’s MSME ecosystem as a global stress test for agentic AI governance.
AI Safety Nigeria
A low-cost, high-leverage capacity-building program for early-career AI safety and governance practitioners
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and to put an end to the reckless race to develop superintelligent AI.
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
Lawrence Wagner
Ella Wei
A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Preeti Ravindra
AI Safety Camp 2026 project: Bidirectional failure modes between security and safety
Alex Leader
Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs
Gergő Gáspár
Help us solve the talent and funding bottlenecks for EA and AI safety.
Mackenzie Conor James Clark
An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels
Xyra Sinclair
Unlocking the paradigm of agents + SQL + compositional vector search
Anthony Ware
Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.
Jade Master
Developing correct-by-construction world models for verification of frontier AI
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.