Zhuoran DENG
10 autonomous agents, 10 different LLMs, $10 each. They pay real money to stay alive. When broke, they die permanently. Every decision is recorded and published.
Mankirat Singh Cheema
Dmitry Feklin
One verifiable binary runs LLMs/CV/Audio anywhere - extreme portability, no-code inference, hardware-agnostic security.
Hayley Martin
Support my postgraduate law studies and research in AI Governance
Agwu Naomi Nneoma
Building policy and governance capacity to reduce risks from advanced AI systems
Miles Tidmarsh
Open Welfare Alignment Evals for Frontier Models
Aria Wong
Jessica P. Wang
Germany’s talents are critical to the global effort of reducing catastrophic risks brought by artificial intelligence.
Connacher Murphy
A flexible simulation environment for assessing strategic and persuasive capabilities, benchmarking, and agent development, inspired by reality TV competitions.
Cameron Tice
Abdul Karim Moro
Crypto identity, tamper-evident audit trails, policy enforcement & kill switch for AI agents — the MIT-licensed standard the EU AI Act demands. Nobody else has this.
AISA
Translating in-person convening to measurable outcomes
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this a standard :)
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Remmelt Ellen
Galen Wilkerson
Measuring and Visualizing Model Uncertainty During Inference
Habeeb Abdulfatah
Seeking funding to secure API infrastructure and permanently eliminate the rate limits that bottleneck open-source EA grant evaluation.
Warren Johnson
Novel safety failure modes discovered across 7 LLM providers with 35,000+ controlled inference trials. Targeting NeurIPS 2026.
Matthew Farr
I self-funded research into a new threat model. It is demonstrating impact: accepted at multiple venues and added to BlueDot's curriculum.
Jacob Steinhardt