Lee Seulki
Networking and sharing the AIO Integrity Index (11,340 scenarios) with global policymakers to address "Integrity Hallucination" in LLMs.
Griffin Walters
AI completes tasks to 100% by default. Our middleware makes that impossible: human judgment is required at critical decision points, enforced by schema.
Mohamed Elrashid
Jessica P. Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks from artificial intelligence.
Tony Rost
Resources for journalists, clinicians, and educators before AI consciousness debates calcify.
AISA
Translating in-person convenings into measurable outcomes
Haakon Huynh
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this the standard :)
Remmelt Ellen
Jacob Steinhardt
Tom Maltby
A Three-Month Falsification-First Evaluation of CREATE
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Mercy Kyalo
Operational costs for AISEA
Lawrence Wagner
Larry Arnold
A modular red-teaming and risk-evaluation framework for LLM safety
Manraj Singh
Exploring benchmarking approaches that do not become saturated over time
Vahit FERYAD
Build an agentic LLM+VLM pipeline that generates product visuals and automatically verifies identity, color, and artifacts, enabling scalable, trustworthy e-commerce
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.