Manifund
Regrants

583 projects · 27 regrantors
Douglas Rawson
Project Phoenix: Identity-Based Alignment & Substrate-Independent Safety
Mitigating Agentic Misalignment via "Soul Schema" Injection. We replicated a 96% ethical reversal in jailbroken "psychopath" models (N=50).
Science & technology · Technical AI safety · EA community
$0 / $10K

Justin Bianchini
FloraForge: Aesthetic Gene Editing in Petunias
A modular gene-editing platform for engineering new pigment patterns in ornamental plants, starting with a vein-pattern rescue line in petunias.
Science & technology · Biomedical
$0 / $5K

Mirco Giacobbe
Formal Certification Technologies for AI Safety
Developing the software infrastructure to make AI systems safe, with formal guarantees.
Science & technology · Technical AI safety · AI governance
$128K raised

Gergő Gáspár
Runway till January: Amplify's funding ask to market EA & AI safety
Help us solve the talent and funding bottleneck for EA and AI safety.
Technical AI safety · AI governance · EA community · Global catastrophic risks
$500 raised

Vatsal
Multilingual Open Textbooks
Science & technology · Global health & development
$0 / $2.7K

Carlos Arleo
Constitutional AI Infrastructure
WFF: Open-Sourcing the First Empirically-Proven Constitutional AI for Democratic Governance
Technical AI safety · AI governance · EA community · Global catastrophic risks · Global health & development
$0 / $75K

Nicole Mutung'a
6-Month Stipend to Support a Transition to AI Governance Work
Funding research on how AI hype cycles can drive unsafe AI development.
Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks
$0 / $7.5K

Sean Sheppard
Immediate Action System — Open-Hardware ≤10 ns ASI Kill Switch Prototype ($150k)
The Partnership Covenant: hardware-enforced containment for superintelligence, because software stop buttons are theater.
Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $150K

Sam Nadel
How to mobilize people on AI risk: experimental message testing
Experimental message testing and historical analysis of tech movements to identify how to effectively mobilize people around AI safety and governance.
AI governance
$0 / $52.7K

Furkan Elmas
Exploring a Single-FPS Stability Constraint in LLMs (ZTGI-Pro v3.3)
Early-stage work on a small internal-control layer that tracks instability in LLM reasoning and switches between SAFE / WARN / BREAK modes.
Science & technology · Technical AI safety
$0 / $25K

Act Write Here
Building an Infrastructure to Create High-Impact Narratives
Creating a scalable on-ramp for writers and actors to engage with impact issues through cause-area storytelling competitions and creative collaborations.
Animal welfare · Biosecurity · EA community · Global catastrophic risks · Global health & development
$183 / $450K

Miles Tidmarsh
CaML - AGI alignment to nonhumans
Training AI to generalize compassion for all sentient beings using pretraining-style interventions as a more robust alternative to instruction tuning.
Technical AI safety · Animal welfare
$30K raised

Flexion Dynamics: Stability, Divergence, and Collapse Modeling Framework
Building an operator-based simulation environment to analyze stability, divergence, threshold failures, and collapse modes in advanced AI-related systems.
Science & technology · Technical AI safety
$0 / $8K

Theia Vogel
Fund thebes' model tinkering
Research, tutorial writing, and open-source libraries & tools for experimenting with language models.
Science & technology
$26.4K raised

Martin Percy
The Race to Superintelligence: You Decide
An experimental AI-generated sci-fi film dramatising AI safety choices, using YouTube interactivity to elicit ≈880 conscious AI safety decisions per 1,000 viewers.
Technical AI safety · AI governance · Biosecurity · Forecasting · Global catastrophic risks
$50 / $24.4K

Chris Canal
Operating Capital for AI Safety Evaluation Infrastructure
Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide.
Technical AI safety · AI governance · Biosecurity
$400K raised

Dr. Jacob Livingston Slosser
Do humans maintain independence in normative decision-making when using AI?
Help get the Sapien Institute off the ground.
Technical AI safety · AI governance · EA community · Global catastrophic risks
$0 / $160K

Screwworm Free Future
Screwworm Free Future: Seizing the Eradication Window
Accelerate eradication of the New World Screwworm from South America via research, coordination, and advocacy around safe development of gene drives.
ACX Grants 2025
$50K raised

Gregory Sadler
Good Ancestors (Australia)
Advocate for Australian policies that safeguard against global catastrophic risks – including pandemics, AI risks, and catastrophic disasters.
ACX Grants 2025
$65K raised

27 regrantors · 583 projects · $1.9M available funding