Matthew Farr
Probing possible limitations and assumptions of interpretability | Articulating evasive risk phenomena arising from adaptive and self-modifying AI
McKim Jean-Pierre
Help me, an economic historian from an underrepresented background, develop tech skills and reflect on advanced technologies to pivot to an AI governance career.
Jonathan Claybrough
A 5-day bootcamp upskilling participants in biosecurity, enabling and empowering career changes toward reducing biorisks, from the ML4Good organisers
Michele Rocco Smeets
A mainstream fictional TV series exposing the dangers of AI
Center for AI Policy
Advocating for U.S. federal AI safety legislation to reduce catastrophic AI risk.
Jay Luong
Travel/accommodation support for a lead co-organiser of the Ethos+Tékhnē spring school
Rufo Guerreschi
Catalyzing a uniquely bold, timely and effective treaty-making process for AI
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Murray Buchanan
Leveraging AI to enable coordination without demanding centralization
Arkose
AI safety outreach to experienced machine learning professionals
Netiwit Chotiphatphaisal
Bringing Utilitarianism to Thai Society
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
Nuño Sempere
A foresight and emergency response team seeking to react fast to calamities
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Jordan Braunstein
Combining Kickstarter-style functionality with transitional anonymity to decrease the risk, and raise the expected value, of participating in collective action.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Alex Lintz
Mostly retroactive funding for prior work on AI safety comms strategy as well as career transition support.
PIBBSS
Fund unique approaches to research, field diversification, and the scouting of novel ideas by experienced researchers, supported by the PIBBSS research team
PauseAI US
Seeking support after missing out in the SFF main round
Orpheus Lummis
Non-profit facilitating progress in AI safety R&D through events