Sankalp Gilda
Two co-authored workshop papers (LLM reasoning, agentic-AI accountability), presented in April 2026 in Rio. Requesting partial trip reimbursement.
Alex Kwon
If your reward model is an LLM, you cannot tell whether the policy is gaming the reward or actually getting better. We built a simulator instead.
Matthew A Cator
Funding the open-source launch of a working claim-state system and the local firewall bridge that carries verification-before-voice into governed agent action.
Kumari Neha Priya
Urgent funding needed by May 8 for graduate policy training focused on AI governance
Tom Bibby
Social media content across YouTube, Instagram, and TikTok to grow AI x-risk awareness and build political momentum for a global pause.
Developing enforceable architectural constraints, safety mechanisms, and certification criteria to keep advanced AI systems aligned and non-conscious
AI Understanding
Building the first browser-based digital laboratory for interactive AI Safety education and failure-mode discovery.
Modeling Cooperation
Software tools and research to quantify coordination failures and inform policy decisions.
Mu Zi
This round of funding will be used primarily for prototype hardening, artifact packaging, runtime evaluation, and preparation for external review.
Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Aashka Patel
Inspiring India’s Middle‑Schoolers to pursue AI Safety, Governance, and X‑Risk Work
Ida-Emilia Kaukonen
A 15,000+ page corpus on long-term interaction, symbolic language, unusual model behavior, and safety edge cases.
Alex Hakuzimana
Africa's Voice in Global AI Safety
Dr Richard Armitage
A trusted profession that has advocated against existential risks like nuclear war can do so again for AI — but clinicians must first be made aware of the risks
Jessica Pu Wang
Germany’s talents are critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Gaetan Selle
This is a small grant buying a large increase in high-quality Francophone AI risk communication from a creator who already has a track record.
Sean Kwon
Open source agent monitoring tools to detect failures, infinite loops, and unsafe behavior in production AI systems
Remmelt Ellen
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this standard :)
Covering the article processing charge for an accepted Analysis paper (Open Access) calling for collaboration between public health and existential risk studies