Timotius A P Lagaunne
Phase 1
Nicholas Kruus
LLMs could automate intelligence analysis. I wrote the first paper on governing this; $5k buys 2mo to revise it and scope an org research branch
Lawrence Wagner
A benchmark for studying how failures spread across multi-agent AI systems and whether they can be detected and interrupted in time.
Wasim Gadwal
Observability and interpretability toolkit for world models in AI safety and mechanistic interpretability research.
Sung Hun Kwag
An open-source safety pilot for detecting metric gaming, pseudo-improvement, and oversight evasion
Pu Wang (Jessica)
Germany’s talent is critical to the global effort to reduce catastrophic risks from artificial intelligence.
Kateryna Morozovska
Elliot McKernon
A shared framework, case studies, and decision tools to help policymakers and AISIs identify gaps, prioritize interventions, and coordinate AGI readiness.
Haakon Huynh
Aya Samadzelkava
LLMs scale language, not method. HP turns hypothesis-driven papers into machine-readable maps of variables, controls, stats, and findings for researchers & AI.
Adam Boon
An executable reasoning quality framework that checks whether AI-generated arguments are logically sound — not just factually accurate. Live at usesophia.app.
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Hayley Martin
Support my postgraduate law studies and research in AI Governance
Gergely Máté
An Interactive Tool for Navigating AI Career Risk
Theia Vogel
Research, tutorial writing, and open-source libraries & tools for experimenting with language models
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees
Justin Bianchini
A modular gene-editing platform for engineering new pigment patterns in ornamental plants, starting with a vein-pattern rescue line in petunias.
Aashka Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make it the standard :)
Habeeb Abdulfatah
Seeking funding to secure API infrastructure and permanently eliminate the rate limits bottlenecking open-source EA grant evaluation.
Matthew Farr
I self-funded research into a new threat model. The work is demonstrating impact: accepted at multiple venues and added to BlueDot's curriculum.