Sean Peters
I'd like to explore a research agenda at the intersection of time horizon model evaluation and control protocols.
Aditya Arpitha Prasad
Practicing Embodied Protocols that work with Live Interfaces
Michaël Rubens Trazzi
20 Weeks' Salary to reach a neglected audience of 10M viewers
Lindsay Langenhoven
Support our mission to educate millions through podcasts and videos before unsafe AI development outruns human control.
Connor Axiotes
Making God is now raising funds for post-production so we can deliver a festival-ready documentary fit for Netflix acquisition.
Anthony Duong
Hieu Minh Nguyen
LLMs often know when they are being evaluated. We’ll do a study comparing various methods to measure and monitor this capability.
David Chanin
Apart Research
Funding ends June 2025: Urgent support for a proven AI safety pipeline converting technical talent from 26+ countries into published contributors
Bryce Meyer
Sudarsh Kunnavakkam
Building model organisms of CoT and Python packages for intervention in reasoning traces
Chi Nguyen
Making sure AI systems don't mess up acausal interactions
Kristina Vaia
The official AI safety community in Los Angeles
Geoffrey Hinton & Yoshua Bengio Interviews Secured, Funding Still Needed
Igor Ivanov
Asterisk Magazine
Shawn Kulasingham
Creating a cinematic AI safety documentary with entertainment value for the public. We need $5k to create a trailer and conduct foundational interviews.
Steve Petersen
Teleology, agential risks, and AI well-being
Itay Yona
Sustaining and Scaling a Grassroots Research Collective for Neural Network Interpretability and Control