Dr. Jacob Livingston Slosser
Help get the Sapien Institute off the ground
Act Write Here
Creating a scalable on-ramp for writers and actors to engage with impact issues through cause-area storytelling competitions and creative collaborations.
Pedro Bentancour Garin
Building the first external oversight and containment framework + high-rigor attack/defense benchmarks to reduce catastrophic AI risk.
Martin Percy
An experimental AI-generated sci-fi film dramatising AI safety choices. Using YouTube interactivity to elicit ≈880 conscious AI safety decisions per 1k viewers.
Agwu Naomi Nneoma
Funding a Master's in AI, Ethics & Society to transition into AI governance, long-term risk mitigation, and safety-focused policy development.
Jade Master
Developing correct-by-construction world models for verification of frontier AI
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Rufo Guerreschi
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely and proper US-China-led global AI treaty
David Carel
Accelerating the adoption of air filters in every classroom
Leo Hyams
A 3-month fellowship in Cape Town, connecting a global cohort of talent to top mentors at MIT, Oxford, CMU, and Google DeepMind
Thane Ruthenis
Research agenda aimed at developing methods for constructing powerful, easily interpretable world-models.
Aditya Arpitha Prasad
Practicing Embodied Protocols that work with Live Interfaces
Jared Johnson
Runtime safety protocols that modify reasoning without weight changes. Operational across GPT, Claude, and Gemini with zero security breaches in classified use.
Connor Axiotes
Making God is now raising funds for post-production so we can deliver a festival-ready documentary fit for Netflix acquisition.
Petr Salaba
Working title: Seductive Machines and Human Agency
Aditya Raj
Current LLM safety methods treat harmful knowledge as removable chunks. This amounts to controlling the model, and it does not work.
Akhil Puri
Original essays and videos on systemic alternatives to shift the Overton window and build cultural momentum for policies supporting long-term resilience and well-being.