Manifund

Global catastrophic risks

26 proposals
46 active projects
$3.02M
Grants (193) · Impact certificates (8)

Aashkaben Kalpesh Patel

Help a Bootstrapped AI Risk Literacy Founder Get To IASEAI 2026 in Paris

Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make it the standard.

Science & technologyTechnical AI safetyAI governanceEA communityGlobal catastrophic risks
3
3
$75 / $3.04K

Tomasz Kiliańczyk

"Emergent Depopulation": Translation and Publication

Translating an AI safety report (1k+ downloads) for peer-reviewed publication to formalize "Emergent Depopulation" as a novel systemic risk.

Technical AI safetyAI governanceEA communityForecastingGlobal catastrophic risks
1
0
$0 / $2.5K

Jessica P. Wang

Safe AI Germany (SAIGE)

Germany’s talent is critical to the global effort to reduce catastrophic risks from artificial intelligence.

Science & technologyTechnical AI safetyAI governanceEA communityGlobal catastrophic risks
3
1
$0 / $285K

Robert Craft

The DAPRE Initiative: Research Foundation

Grant request: £40,000 (seed funding); duration: 6 months

Science & technologyForecastingGlobal catastrophic risksGlobal health & development
1
0
$0 / $55K

Mateusz Bagiński

Ambitious AI Alignment Seminar

One month to study, explain, and try to solve superintelligence alignment

Technical AI safetyGlobal catastrophic risks
10
12
$5.5K / $180K

Jess Hines (Fingerprint Content)

Department of Future Listening: Narrative Risk Radar (UK pilot)

Detecting polarising story-frames early and building better narratives: fast, practical, adoptable.

AI governanceForecastingGlobal catastrophic risksGlobal health & development
1
0
$0 / $300K

David Krueger

Evitable: a new public-facing AI risk non-profit

Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.

AI governanceGlobal catastrophic risks
5
3
$5.28K / $1.5M

Amrit Sidhu-Brar

Forethought

Research on how to navigate the transition to a world with superintelligent AI systems

AI governanceGlobal catastrophic risks
4
3
$365K / $3.25M

Lawrence Wagner

Reducing Risk in AI Safety Through Expanding Capacity

Technical AI safetyAI governanceEA communityGlobal catastrophic risks
2
3
$11K raised

Finn Metz

AI Security Startup Accelerator Batch #2

Funding 5–10 AI security startups through Seldon’s second SF cohort.

Science & technologyTechnical AI safetyGlobal catastrophic risks
4
6
$355K raised

Preeti Ravindra

Addressing Agentic AI Risks Induced by System Level Misalignment

AI Safety Camp 2026 project: bidirectional failure modes between security and safety

Technical AI safetyGlobal catastrophic risks
7
1
$0 / $4K

Sara Holt

Paper Clip Apocalypse (War Horse Machine)

A short documentary and music video

Science & technologyTechnical AI safetyAI governanceGlobal catastrophic risks
1
1
$0 / $40K

Avinash A

Terminal Boundary Systems and the Limits of Self-Explanation

Formalizing the "Safety Ceiling": An Agda-Verified Impossibility Theorem for AI Alignment

Science & technologyTechnical AI safetyGlobal catastrophic risks
1
1
$0 / $30K

Gergő Gáspár

Runway until January: Amplify's funding ask to market EA & AI Safety

Help us solve the talent and funding bottleneck for EA and AIS.

Technical AI safetyAI governanceEA communityGlobal catastrophic risks
9
6
$520 raised

Melina Moreira Campos Lima

Public Food Procurement as a Climate Policy Tool in the EU

Assessing the Climate Potential of Catering Systems in Public Schools and Hospitals

Science & technologyAnimal welfareGlobal catastrophic risks
1
0
$0 / $30K

Alex Leader

Offensive Cyber Kill Chain Benchmark for LLM Evaluation

Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs

Science & technologyTechnical AI safetyAI governanceGlobal catastrophic risks
1
2
$0 / $3.85M

Ella Wei

Testing a Deterministic Safety Layer for Agentic AI (QGI Prototype)

A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.

Science & technologyTechnical AI safetyAI governanceGlobal catastrophic risks
1
0
$0 / $20K

Jade Master

SDCPNs for AI Safety

Developing correct-by-construction world models for verification of frontier AI

Science & technologyTechnical AI safetyGlobal catastrophic risks
2
0
$39K raised

Mackenzie Conor James Clark

AURA Protocol: Measurable Alignment for Autonomous AI Systems

An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels

Science & technologyTechnical AI safetyAI governanceGlobal catastrophic risks
1
1
$0 / $75K

Unfunded Projects


Anthony Ware

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap

Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.

Technical AI safetyAI governanceGlobal catastrophic risks
2
1
$0 raised