Center for AI Policy
Advocating for U.S. federal AI safety legislation to reduce catastrophic AI risk.
Arkose
AI safety outreach to experienced machine learning professionals
Murray Buchanan
Leveraging AI to enable coordination without demanding centralization
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Amritanshu Prasad
Marisa Nguyen Olson
Case Study: Defending OpenAI's Nonprofit Mission
Nuño Sempere
A foresight and emergency response team seeking to react fast to calamities
Sterlin Lujan
Grant to Get the Institute Moving at Lightspeed
Tyler Johnston
AI-focused corporate campaigns and industry watchdog
Rebecca Petras
A system of collective action is necessary to help tech workers safely speak out about concerns
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Jordan Braunstein
Combining "kickstarter"-style functionality with transitional anonymity to decrease the risk and raise the expected value of participating in collective action.
Alex Lintz
Mostly retroactive funding for prior work on AI safety comms strategy as well as career transition support.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Claire Short
Program for Women in AI Alignment Research
PauseAI US
SFF main round did us dirty!
Orpheus Lummis
Non-profit facilitating progress in AI safety R&D through events
Michel Justen
Help turn the video from an amateur side project into an exceptional, animated distillation
Apart Research
Support the growth of an international AI safety research and talent program