Remmelt Ellen
Cost-efficiently support new careers and new organisations in AI Safety.
Epoch AI
Tracking and predicting AI progress toward AGI
Arkose
AI safety outreach to experienced machine learning professionals
Mikolaj Kniejski
Conduct ACE-style cost-effectiveness analyses of technical AI safety orgs.
Iván Arcuschin Moreno
Iván and Jett are seeking funding for a month of research on unfaithful chain-of-thought, under Arthur Conmy's mentorship, before the start of MATS.
Michaël Rubens Trazzi
How California became ground zero in the global debate over who gets to shape humanity's most powerful technology
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Marisa Nguyen Olson
Case Study: Defending OpenAI's Nonprofit Mission
Gautier Ducurtil
I need to focus on my studies and on creating AI Safety projects without having to take a dead-end job to fund them.
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Alex Cloud
Claire Short
Program for Women in AI Alignment Research
Liron Shapira
Let's warn millions of people about the near-term AI extinction threat by directly and proactively explaining the issue in every context where it belongs
PIBBSS
Fund unique research approaches, field diversification, and the scouting of novel ideas by experienced researchers, supported by the PIBBSS research team
Jesse Hoogland
Addressing Immediate AI Safety Concerns through DevInterp
Damiano Fornasiere and Pietro Greiner
Orpheus Lummis
Non-profit facilitating progress in AI safety R&D through events
Apart Research
Support the growth of an international AI safety research and talent program