Chris Canal
Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide
Jared Johnson
Runtime safety protocols that modify reasoning without weight changes. Operational across GPT, Claude, and Gemini with zero security breaches in classified use
Armon Lotfi
Multi-agent AI security testing that reduces evaluation costs by 10-20x without sacrificing detection quality
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Michaël Rubens Trazzi
Funding gap to pay for a video editor and scriptwriter
Quentin Feuillade--Montixi
Funding to cover the first 4 months and relocation to San Francisco
Chris Wendler
Help fund our student’s trip to NeurIPS to present his main conference paper on interpretable features in text-to-image diffusion models.
Justin Olive
Funding to cover our expenses for 3 months during an unexpected shortfall
Ethan Nelson
Leveraging a 23K Subscriber Channel to Advance AI Safety Discourse
Leo Hyams
A 3-month fellowship in Cape Town, connecting a global cohort of talent to top mentors at MIT, Oxford, CMU, and Google DeepMind
Micheal Smith
A National AI Co-Pilot for Emergency Response
Thane Ruthenis
Research agenda aimed at developing methods for constructing powerful, easily interpretable world-models.
Aditya Arpitha Prasad
Practicing Embodied Protocols that work with Live Interfaces
Selma Mazioud
Attending NeurIPS and the San Diego AI Alignment Workshop to advance research on neural network safety and complexity.
Sean Peters
I'd like to explore a research agenda at the intersection of time horizon model evaluation and control protocols.
20 weeks' salary to reach a neglected audience of 10M viewers
Aditya Raj
Current LLM safety methods treat harmful knowledge as removable chunks. This amounts to controlling the model, and it does not work.
Connor Axiotes
Making God is now raising for post-production so we can deliver a festival-ready documentary fit for Netflix acquisition.
Apart Research
Funding ends June 2025: urgent support for a proven AI safety pipeline converting technical talent from 26+ countries into published contributors
Steve Petersen
Teleology, agential risks, and AI well-being