Zaelani
18+ preprints across multiple fields, all written on a 2GB RAM phone. $600 removes the only thing standing between me and the next body of work.
Dhruv Yadav
Auditing and improving LLM-as-a-judge systems via interpretable aggregation of preferences
Nicholas Kruus
LLMs could automate intelligence analysis. I wrote the first paper on governing this; $5k buys 2mo to revise it and scope an org research branch
Ahmed Dawoud
An advanced agent that perceives your screen and executes tasks by controlling the mouse, acting as a digital proxy to handle complex work on your behalf.
Jessica Pu Wang
Germany’s talents are critical to the global effort of reducing catastrophic risks brought by artificial intelligence.
Lawrence Wagner
A benchmark for studying how failures spread across multi-agent AI systems and whether they can be detected and interrupted in time.
Wasim Gadwal
Observability and interpretability toolkit for world models in AI safety and mechanistic interpretability research.
Sung Hun Kwag
An open-source safety pilot for detecting metric gaming, pseudo-improvement, and oversight evasion
Kateryna Morozovska
Important contributions in the application of scientific AI to improve uncertainty quantification and advance plasma physics research.
Haakon Huynh
Elliot McKernon
A shared framework, case studies, and decision tools to help policymakers and AISIs identify gaps, prioritize interventions, and coordinate AGI readiness.
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Theia Vogel
Research, tutorial writing, and open-source libraries & tools for experimenting with language models
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees
Justin Bianchini
A modular gene-editing platform for engineering new pigment patterns in ornamental plants, starting with a vein-pattern rescue line in petunias.
Aashka Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this the standard :)
Aya Samadzelkava
LLMs scale language, not method. HP turns hypothesis-driven papers into machine-readable maps of variables, controls, stats, and findings for researchers & AI.
Hayley Martin
Support my postgraduate law studies and research in AI Governance
Adam Boon
An executable reasoning quality framework that checks whether AI-generated arguments are logically sound — not just factually accurate. Live at usesophia.app.
Habeeb Abdulfatah
Seeking funding to secure API infrastructure and permanently eliminate the rate limits bottlenecking open-source EA grant evaluation.