Michaël Rubens Trazzi
20 Weeks' Salary to reach a neglected audience of 20M-30M viewers
Shawn Kulasingham
Fund a project that aims to reach millions of viewers in perpetuity. Help create the future of AI Comms.
Jord Nguyen
LLMs often know when they are being evaluated. We’ll do a study comparing various methods to measure and monitor this capability.
Anthony Duong
David Chanin
Kaynen B Pellegrino
Itay Yona
Sustaining and Scaling a Grassroots Research Collective for Neural Network Interpretability and Control
H
Arifa Khan
The Reputation Circulation Standard - Implementation Sprint
Sudarsh Kunnavakkam
Building model organisms of CoT and Python packages for intervention in reasoning traces
Belinda Mo
A comedy that gets people thinking about AI in society
Bryce Meyer
Kristina Vaia
The official AI safety community in Los Angeles
Chi Nguyen
Making sure AI systems don't mess up acausal interactions
Apart Research
Funding ends June 2025: Urgent support for proven AI safety pipeline converting technical talent from 26+ countries into published contributors
Sarah Wiegreffe
https://actionable-interpretability.github.io/
Igor Ivanov
Asterisk Magazine
Connor Axiotes
Geoffrey Hinton & Yoshua Bengio Interviews Secured, Funding Still Needed
Steve Petersen
Teleology, agential risks, and AI well-being