Jason Lee Harvey
Establishing a sovereign, decentralized Sentry node to audit frontier AI agents for logic escapes ($E_{escape}$) using the Inverted Social Drift ($SD$) metric.
Aashka Patel
Redirecting India’s Middle‑Schoolers into AI Safety, Governance, and X‑Risk Work
Nicholas Kruus
LLMs could automate intelligence analysis. I wrote the first paper on governing this; $5k buys two months to revise it and scope an org research branch
Jessica Pu Wang
Germany’s talents are critical to the global effort of reducing catastrophic risks brought by artificial intelligence.
Pedro Bentancour Garin
Runtime safety, oversight, rollback, and control infrastructure for advanced AI in real-world, high-consequence environments.
Wasim Gadwal
Observability and interpretability toolkit for world models in AI safety and mechanistic interpretability research.
Johan Fredrikzon
Designing a Project Funding Proposal
Suki Krishna
Investigate how LLMs behave in multi-agent environments, particularly how contextual framing and strategic advice can systematically manipulate coordination outcomes
Remmelt Ellen
Elliot McKernon
A shared framework, case studies, and decision tools to help policymakers and AISIs identify gaps, prioritize interventions, and coordinate AGI readiness.
AISA
Translating in-person convening to measurable outcomes
Haakon Huynh
AI Understanding
Lindsay Langenhoven
Help cover the costs of creating an in-depth article about the impact of mass biometric surveillance in the age of AI.
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Jacob Steinhardt
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this standard :)
Mahmud Omar
An open platform to stress-test how LLMs handle bias, pressure points, and clinical decisions. Built on real, peer-reviewed evidence.
Aya Samadzelkava
LLMs scale language, not method. HP turns hypothesis-driven papers into machine-readable maps of variables, controls, stats, and findings for researchers & AI.