Arifa Khan
The Reputation Circulation Standard - Implementation Sprint
Petr Salaba
Working title: Seductive Machines and Human Agency
Davit Jintcharadze
The Unjournal is making research better by evaluating what really matters. We aim to make rigorous research more impactful and impactful research more rigorous.
Kristina Vaia
The official AI safety community in Los Angeles
Chi Nguyen
Making sure AI systems don't mess up acausal interactions
Shep Riley
Running an EA and AIS group, connecting participants to high-impact orgs
Asterisk Magazine
Connor Axiotes
Geoffrey Hinton & Yoshua Bengio Interviews Secured, Funding Still Needed
Centre pour la Sécurité de l'IA
4M+ views on AI safety: Help us replicate and scale this success with more creators
Tsvi Benson-Tilsen
Accelerating strong human germline genomic engineering to make lots of geniuses
Florian Dietz
Revealing Latent Knowledge Through Personality-Shift Tokens
Yuanyuan Sun
Building bridges between Western and Chinese AI governance efforts to address global AI safety challenges.
John Sherman
Funding For Humanity: An AI Risk Podcast
Tyler John
Jai Dhyani
Developing AI Control for Immediate Real-World Use
ampdot
Community exploring and predicting potential risks and opportunities arising from a future involving many independently controlled AI systems
Francisco Carvalho
Within 18 months, the nooscope will deliver public tools to map how ideas spread, starting with psyop detection
Peter Gebauer
Creating a contest for robust, detailed proposals and red-teaming of AI safety plans: fast action for safe transformative AI
Amritanshu Prasad
Funding for my work across AI governance and policy research
Distilling AI safety research into a complete learning ecosystem: textbook, courses, guides, videos, and more.