Matthew Farr
I self-funded research into a new threat model. It is demonstrating impact (accepted at multiple venues and added to BlueDot's curriculum).
Cameron Tice
AISA
Translating in-person convening to measurable outcomes
Jessica P. Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this standard :)
Tom Maltby
A Three-Month Falsification-First Evaluation of CREATE
Remmelt Ellen
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Mercy Kyalo
Operational costs for AISEA
Anthony Etim
Defense-first monitoring and containment to reduce catastrophic AI risk from stolen frontier model weights
Larry Arnold
A modular red-teaming and risk-evaluation framework for LLM safety
MANRAJ SINGH
Exploring benchmarking approaches that do not become saturated over time
Vahit FERYAD
Build an agentic LLM+VLM pipeline that generates product visuals and automatically verifies identity, color, and artifacts, enabling scalable, trustworthy e-commerce
Cefiyana
Developing an Edge-AI framework to reduce response latency to <0.6s, mitigating user cognitive stress and establishing "Digital Pharmacotherapy" standards.
Adithyan Madhu
Building an open-source protocol to ensure that digital luck and high-stakes selections are provably fair, transparent, and beyond the reach of centralized bias
Jacob Steinhardt
Boyd Kane
by buying gift cards for the game and handing them out at the OpenAI offices
Krishna Patel
Expanding proven isolation techniques to high-risk capability domains in Mixture-of-Experts models
Lawrence Wagner
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.