
Global catastrophic risks

11 proposals
45 active projects
$2.75M
Grants: 171 · Impact certificates: 8

Ryan Celimon

Making AI Safety Understandable to Everyday Audiences

Translating AI’s biggest threats into videos anyone can understand: AGI, misalignment, and job loss explained.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1
0
$0 / $15K

Lawrence Wagner

Reducing Risk in AI Safety Through Expanding Capacity

Technical AI safety · AI governance · EA community · Global catastrophic risks
2
0
$10K / $155K

Amrit Sidhu-Brar

Forethought

Research on how to navigate the transition to a world with superintelligent AI systems

AI governance · Global catastrophic risks
4
2
$300K / $3.25M

Will Shin

Building an Educational Universe Through Animals and Longtermist Storytelling

A global IP project reimagining ecology and future technology and institutions through character-driven narratives.

Science & technology · AI governance · Animal welfare · Biosecurity · EA community · Global catastrophic risks
1
0
$0 / $15K

Preeti Ravindra

Addressing Agentic AI Risks Induced by System Level Misalignment

AI Safety Camp 2026 project: bidirectional failure modes between security and safety

Technical AI safety · Global catastrophic risks
1
0
$0 / $4K

Finn Metz

AI Security Startup Accelerator Batch #2

Funding 5–10 AI security startups through Seldon’s second SF cohort.

Science & technology · Technical AI safety · Global catastrophic risks
4
6
$355K raised

Muhammad Ahmad

Building Frontier AI Governance Capacity in Africa (Pilot Phase)

A pilot to build policy and technical capacity for governing high-risk AI systems in Africa

Technical AI safety · AI governance · Biosecurity · Forecasting · Global catastrophic risks
1
0
$0 / $50K

Gergő Gáspár

Runway until January: Amplify's funding ask to market EA & AI Safety

Help us solve the talent and funding bottleneck for EA and AIS.

Technical AI safety · AI governance · EA community · Global catastrophic risks
9
6
$520 raised

Xyra Sinclair

ExoPriors, Inc. founder runway

Building foundational subjective judgement infrastructure

Science & technology · Technical AI safety · Biosecurity · EA community · Forecasting · Global catastrophic risks
1
0
$0 / $2.5M

Jade Master

SDCPNs for AI Safety

Developing correct-by-construction world models for verification of frontier AI

Science & technology · Technical AI safety · Global catastrophic risks
2
0
$39K raised

David Rozado

Disentangling Political Bias from Epistemic Integrity in AI Systems

An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems

Science & technology · Technical AI safety · ACX Grants 2025 · AI governance · Forecasting · Global catastrophic risks
1
1
$50K raised

Orpheus Lummis

Guaranteed Safe AI Seminars 2026

Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.

Technical AI safety · AI governance · Global catastrophic risks
5
3
$30K raised

Rufo Guerreschi

The Deal of the Century (for AI)

Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely, and properly designed US-China-led global AI treaty

AI governance · Global catastrophic risks
3
5
$11.1K raised

David Carel

Clean Indoor Air for Schools

Accelerating the adoption of air filters in every classroom

Science & technology · ACX Grants 2025 · Biomedical · Biosecurity · Global catastrophic risks · Global health & development
0
1
$150K raised

Leo Hyams

Fund a Fellow for the Cooperative AI Research Fellowship!

A 3-month fellowship in Cape Town, connecting a global cohort of talent to top mentors at MIT, Oxford, CMU, and Google DeepMind

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
8
4
$2.53K raised

Thane Ruthenis

Synthesizing Standalone World-Models

Research agenda aimed at developing methods for constructing powerful, easily interpretable world-models.

Science & technology · Technical AI safety · Global catastrophic risks
3
6
$51.5K raised

Aditya Arpitha Prasad

Groundless Alignment Residency 2025

Practicing Embodied Protocols that work with Live Interfaces

Science & technology · Technical AI safety · EA community · Global catastrophic risks
4
3
$15K raised

Petr Salaba

Development of a Cautionary Tale Feature Film about Gradual Disempowerment

Working title: Seductive Machines and Human Agency

AI governance · Global catastrophic risks
6
5
$100K raised

Dr. Jacob Livingston Slosser

Do humans maintain independence in normative decision-making when using AI?

Help get the Sapien Institute off the ground

Technical AI safety · AI governance · EA community · Global catastrophic risks
1
0
$0 / $160K

Unfunded Projects


Pedro Bentancour Garin

Global Governance & Safety Layer for Advanced AI Systems - We Stop Rogue AI

Building the first external oversight and containment framework + high-rigor attack/defense benchmarks to reduce catastrophic AI risk.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1
1
$0 raised