Manifund

Global catastrophic risks

15 proposals
46 active projects
$2.91M
Grants: 179 · Impact certificates: 8

Marouso Metocharaki

QAI-QERRA: Open-Source Quantum-Ethical Safeguards for AI Misalignment

Independent Greek researcher advancing an open-source quantum-ethical framework with verifiable remorse simulation to reduce misalignment risks in advanced/humanoid…

Science & technology · Technical AI safety · Global catastrophic risks
1 · 0 · $0 / $30K

Ella Wei

Technical Implementation of the Tiered Invariants AI Governance Architecture

Achieving major reductions in code complexity and compute overhead while improving transparency and reducing deceptive model behavior

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1 · 0 · $0 / $20K

David Krueger

Evitable: a new public-facing AI risk non-profit

Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.

AI governance · Global catastrophic risks
4 · 3 · $5.28K / $1.5M

Amrit Sidhu-Brar

Forethought

Research on how to navigate the transition to a world with superintelligent AI systems

AI governance · Global catastrophic risks
4 · 2 · $315K / $3.25M

Alex Leader

Offensive Cyber Kill Chain Benchmark for LLM Evaluation

Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1 · 2 · $0 / $3.85M

Lawrence Wagner

Reducing Risk in AI Safety Through Expanding Capacity

Technical AI safety · AI governance · EA community · Global catastrophic risks
3 · 1 · $10K raised

Finn Metz

AI Security Startup Accelerator Batch #2

Funding 5–10 AI security startups through Seldon’s second SF cohort.

Science & technology · Technical AI safety · Global catastrophic risks
4 · 6 · $355K raised

Preeti Ravindra

Addressing Agentic AI Risks Induced by System Level Misalignment

AI Safety Camp 2026 project: bidirectional failure modes between security and safety

Technical AI safety · Global catastrophic risks
5 · 0 · $0 / $4K

Xyra Sinclair

SOTA Public Research Database + Search Tool

Unlocking the paradigm of agents + SQL + compositional vector search

Science & technology · Technical AI safety · Biomedical · AI governance · Biosecurity · Forecasting · Global catastrophic risks
1 · 0 · $0 / $20.7K

Gergő Gáspár

Runway till January: Amplify's funding ask to market EA & AI Safety

Help us solve the talent and funding bottleneck for EA and AIS.

Technical AI safety · AI governance · EA community · Global catastrophic risks
9 · 6 · $520 raised

Anthony Ware

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap

Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.

Technical AI safety · AI governance · Global catastrophic risks
1 · 0 · $0 / $23.5K

Will Shin

Building an Educational Universe Through Animals and Longtermist Storytelling

A global IP project reimagining ecology and future technology and institutions through character-driven narratives.

Science & technology · AI governance · Animal welfare · Biosecurity · EA community · Global catastrophic risks
1 · 0 · $0 / $15K

Jade Master

SDCPNs for AI Safety

Developing correct-by-construction world models for verification of frontier AI

Science & technology · Technical AI safety · Global catastrophic risks
2 · 0 · $39K raised

David Rozado

Disentangling Political Bias from Epistemic Integrity in AI Systems

An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems

Science & technology · Technical AI safety · ACX Grants 2025 · AI governance · Forecasting · Global catastrophic risks
1 · 1 · $50K raised

Muhammad Ahmad

Building Frontier AI Governance Capacity in Africa (Pilot Phase)

A pilot to build policy and technical capacity for governing high-risk AI systems in Africa

Technical AI safety · AI governance · Biosecurity · Forecasting · Global catastrophic risks
1 · 0 · $0 / $50K

Orpheus Lummis

Guaranteed Safe AI Seminars 2026

Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.

Technical AI safety · AI governance · Global catastrophic risks
5 · 3 · $30K raised

Rufo Guerreschi

The Deal of the Century (for AI)

Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely, and proper US-China-led global AI treaty

AI governance · Global catastrophic risks
3 · 6 · $11.1K raised