Kumari Neha Priya
Urgent funding needed by April 30 for graduate policy training in AI governance
Dr Richard Armitage
A trusted profession that has advocated against existential risks like nuclear war can do so again for AI — but clinicians must first be made aware of the risks
Gen-Z-focused multimedia project that will raise awareness of AI safety and x-risk
Gaetan Selle
This is a small grant buying a large increase in high-quality Francophone AI risk communication from a creator who already has a track record.
Evangale Jooste
Building an AI system where unsafe behaviour is physically impossible. Ethics proven in formal logic, locked in silicon, enforced at every training step.
Aashka Patel
Inspiring India’s Middle‑Schoolers to pursue AI Safety, Governance, and X‑Risk Work
Zaelani
18+ preprints across multiple fields, all written on a 2GB RAM phone. $600 removes the only thing standing between me and the next body of work.
Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Covering the article processing charge for an accepted Analysis paper (Open Access) calling for collaboration between public health and existential risk studies
Nicholas Kruus
LLMs could automate intelligence analysis. I wrote the first paper on governing this; $5k buys 2mo to revise it and scope an org research branch
Euan McLean
Salary & support for 1 year of leadership of Integral Altruism - a movement bridging EA with wisdom
Mox
An incubator & community space in SF; for doers of good and masters of craft
Jessica Pu Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks from artificial intelligence.
Matei-Alexandru Anghel
A Safety Framework for Evaluating AI Humanity Alignment Through Progressive Escalation and Scope Creep
Lawrence Wagner
A benchmark for studying how failures spread across multi-agent AI systems and whether they can be detected and interrupted in time.
Pedro Bentancour Garin
Runtime safety, oversight, rollback, and control infrastructure for advanced AI in real-world, high-consequence environments.
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Johan Fredrikzon
Designing a Project Funding Proposal
AISA
Translating in-person convening to measurable outcomes
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this the standard :)