Technical AI safety

23 proposals
84 active projects
$5.35M
Grants: 244 · Impact certificates: 20

João Medeiros da Fonseca
Elo Clínico: Phenomenological Fine-tuning for Medical AI Alignment
Science & technology · Technical AI safety · Global health & development
$0 / $50K

Lawrence Wagner
Reducing Risk in AI Safety Through Expanding Capacity
Technical AI safety · AI governance · EA community · Global catastrophic risks
$10K / $155K

Krishna Patel
Isolating CBRN Knowledge in LLMs for Safety - Phase 2 (Research)
Expanding proven isolation techniques to high-risk capability domains in Mixture of Experts models
Technical AI safety · Biomedical · Biosecurity
$26.6K / $150K

Preeti Ravindra
Addressing Agentic AI Risks Induced by System-Level Misalignment
AI Safety Camp 2026 project: bidirectional failure modes between security and safety
Technical AI safety · Global catastrophic risks
$0 / $4K

Finn Metz
AI Security Startup Accelerator Batch #2
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Science & technology · Technical AI safety · Global catastrophic risks
$355K raised

Muhammad Ahmad
Building Frontier AI Governance Capacity in Africa (Pilot Phase)
A pilot to build policy and technical capacity for governing high-risk AI systems in Africa
Technical AI safety · AI governance · Biosecurity · Forecasting · Global catastrophic risks
$0 / $50K

Sean Peters
Evaluating Model Attack Selection and Offensive Cyber Horizons
Measuring attack selection as an emergent capability, and extending offensive cyber time horizons to newer models and benchmarks
Technical AI safety
$41K / $41K

Sandy Tanwisuth
Alignment as epistemic system governance under compression
We reframe alignment as the problem of governing meaning and intent when they cannot be fully expressed.
Science & technology · Technical AI safety · AI governance
$0 / $20K

Parker Whitfill
Course Buyouts to Work on AI Forecasting, Evals
Technical AI safety · Forecasting
$38K / $76K

Brian McCallion
Boundary-Mediated Models of LLM Hallucination and Alignment
A mechanistic, testable framework explaining LLM failure modes via boundary writes and attractor dynamics
Technical AI safety · AI governance
$0 / $75K

Christopher Kuntz
Protocol-Level Interaction Risk Assessment and Mitigation (UIVP)
A bounded protocol audit and implementation-ready mitigation for intent ambiguity and escalation in deployed LLM systems.
Science & technology · Technical AI safety · AI governance
$0 / $5K

Jasraj Hari Krishna Budigam
TimeAlign v2: contamination-aware evals for small models (16GB GPUs)
Reusable, low-compute benchmarking that detects data leakage, outputs “contamination cards,” and improves calibration reporting.
Science & technology · Technical AI safety · AI governance
$0 / $46K

Centre pour la Sécurité de l'IA
From Nobel Signatures to Binding Red Lines: The 2026 Diplomatic Sprint
Leveraging 12 Nobel signatories to harmonize lab safety thresholds and secure an international agreement during the 2026 diplomatic window.
Technical AI safety · AI governance
$0 / $400K

Mirco Giacobbe
Formal Certification Technologies for AI Safety
Developing the software infrastructure to make AI systems safe, with formal guarantees
Science & technology · Technical AI safety · AI governance
$128K raised

Gergő Gáspár
Runway till January: Amplify's funding ask to market EA & AI Safety
Help us solve the talent and funding bottleneck for EA and AIS.
Technical AI safety · AI governance · EA community · Global catastrophic risks
$520 raised

L
Visa fee support for Australian researcher to join a fellowship with Anthropic
Technical AI safety
$4K raised

Xyra Sinclair
ExoPriors, Inc. founder runway
Building foundational subjective judgement infrastructure
Science & technology · Technical AI safety · Biosecurity · EA community · Forecasting · Global catastrophic risks
$0 / $2.5M

Miles Tidmarsh
CaML - AGI alignment to nonhumans
Training AI to generalize compassion for all sentient beings using pretraining-style interventions as a more robust alternative to instruction tuning
Technical AI safety · Animal welfare
$30K raised

Chris Canal
Operating Capital for AI Safety Evaluation Infrastructure
Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide
Technical AI safety · AI governance · Biosecurity
$400K raised

Jade Master
SDCPNs for AI Safety
Developing correct-by-construction world models for verification of frontier AI
Science & technology · Technical AI safety · Global catastrophic risks
$39K raised