Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Quentin Feuillade--Montixi
Funding to cover the first 4 months and relocation to San Francisco
Chris Wendler
Help fund our student’s trip to NeurIPS to present his main conference paper on interpretable features in text-to-image diffusion models.
Sunday Godwin James
Funding to integrate AI satellite monitoring tools, Google GPS, cloud-based storage software, AI-assisted geospatial analysis, and real-time analytic reporting.
Selma Mazioud
Attending NeurIPS and the San Diego AI Alignment Workshop to advance research on neural network safety and complexity.
Leo Hyams
A 3-month fellowship in Cape Town, connecting a global cohort of talent to top mentors at MIT, Oxford, CMU, and Google DeepMind
Michaël Rubens Trazzi
Funding gap to pay for a video editor and scriptwriter
Thane Ruthenis
Research agenda aimed at developing methods for constructing powerful, easily interpretable world-models.
Jhet Chan
A self-funded researcher presenting at NeurIPS NeurReps to showcase a new approach to geometry and cognition.
Fred Heiding
A microgrant to help us scale our research on AI-assisted online scams by building real-world defenses for seniors and collecting data for ScamBench.
Aditya Arpitha Prasad
Practicing Embodied Protocols that work with Live Interfaces
Bryce Meyer
Muhammad Ahmad Janyau
Elevating Africa’s Missing Voice in Global AI Safety
fernando yupanqui
I am making music about AI risk
Connor Axiotes
Geoffrey Hinton & Yoshua Bengio Interviews Secured, Funding Still Needed
niplav
Testing intranasal orexin-A administration for sleep need reduction: a 2-3 participant self-blinded randomized controlled trial
A microgrant to help us disseminate our work on AI scams targeting seniors and present it at BSides Las Vegas, DEF CON, and other conferences
Guy
Out of This Box: The Last Musical (Written by Humans)
Akhil Puri
Original essays and videos on systemic alternatives, to shift the Overton window and build cultural momentum for policies supporting long-term resilience and well-being
Lindsay Langenhoven
Support our mission to educate millions through podcasts and videos before unsafe AI development outruns human control.