Aldan Creo
Putting explainability at the forefront of AI text detection
Sean Peters
An early-stage AI safety research group based in Sydney, Australia
Mejdi Sadriu
Self-directed transition into AI safety via hands-on interpretability research and open-source contributions
Pedro Bentancour Garin
I have been invited to a UN conference in Geneva; this application is for a grant to cover travel and living expenses.
Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Kumari Neha Priya
Urgent funding needed by April 30 for graduate policy training in AI governance
Studying more human-like intelligence through constraint-aware, curiosity-driven agents on ARC-AGI-3
Aashka Patel
Inspiring India’s Middle-Schoolers to pursue AI Safety, Governance, and X-Risk Work
Gen-Z-focused multimedia project that will raise awareness of AI safety and x-risk
Zaelani
18+ preprints across multiple fields, all written on a 2GB RAM phone. $600 removes the only thing standing between me and the next body of work.
Samuel Gélineau
Fine-tuning a coding model to bypass the difficulty of verifying attention layers
Sohan Venkatesh
Does CoT causally drive model outputs, or is it a post-hoc rationalisation? Instead of asking whether CoT looks faithful, we intervene on it and observe what happens.
Gaetan Selle
This is a small grant buying a large increase in high-quality Francophone AI risk communication from a creator who already has a track record.
Jonathan Elsworth Eicher
Linh Le
Rishub Jain
Sean Kwon
Open source agent monitoring tools to detect failures, infinite loops, and unsafe behavior in production AI systems
Dhruv Yadav
Auditing and improving LLM-as-a-judge systems via interpretable aggregation of preferences
Jessica Pu Wang
Germany’s talents are critical to the global effort of reducing catastrophic risks brought by artificial intelligence.
Ahmed Dawoud
An advanced agent that perceives your screen and executes tasks by controlling the mouse, acting as a digital proxy to handle complex work on your behalf.