The AI Safety Research Fund

Technical AI safety

Jaeson Booker

Grant proposal · Closes July 31st, 2025

$100 raised · $25,000 minimum funding · $100,000 funding goal


Project Summary

The AI Safety Research Fund is a proposed 501(c)(3) nonprofit dedicated exclusively to advancing AI safety through targeted, transparent, and independent grantmaking. This Manifund project is intended to catalyze the initial donations we need to set up operations and secure fiscal sponsorship. Our goal is to fix critical gaps in the current funding landscape by fundraising outside of traditional EA and longtermist circles. Our operations will be guided by responsiveness, accessibility to average donors, and a commitment to broadening the AI safety ecosystem.

What are this project's goals? How will you achieve them?

Goals:

  1. Increase the total amount of funding going to AI safety.

  2. Streamline the process of applying for and distributing safety-focused grants.

  3. Provide early-stage support to promising but underfunded or unconventional projects.

  4. Expand the donor base beyond traditional Effective Altruism circles.

How we'll achieve them:

  • Build a fully operational grantmaking nonprofit with fiscal sponsorship for immediate 501(c)(3) status.

  • Run regular, predictable grant rounds with transparent criteria and guaranteed decision timelines.

  • Offer seed funding and small grants to new organizations and individuals, especially early-stage and experimental projects.

  • Engage a wide range of donors through accessible fundraising campaigns, ranging from $10/month contributions to major gifts.

  • Operate with full transparency, publishing updates and impact reports for donors and the public.

How will this funding be used?

Initial funding will cover the fiscal sponsorship fee and the cost of setting up operations. Once those are secured, funds will be allocated across three main categories (a rough worked split follows the list):

  1. Grants to AI Safety Projects (~80%+): Funding individual researchers, organizations, and field-building efforts.

  2. Operations (~10%): Staffing (initially one full-time Fund Manager, with additional hires as we scale), grant processing, donor engagement, and reporting. This share may start out higher, depending on staffing needs, before settling at around 10%.

  3. Fiscal Sponsorship Fees (~5–9%): Covering administrative overhead from a fiscal sponsor to provide nonprofit infrastructure, tax deductibility, and legal compliance.
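
As a rough illustration of how these percentages combine, the sketch below allocates the full $100,000 funding goal using assumed rates: 80% for grants, 10% for operations, and 7% for fiscal sponsorship (the midpoint of the stated 5–9% range). These figures are illustrative, not committed.

```python
# Illustrative budget split, assuming the full $100,000 goal is raised.
# The rates are assumptions drawn from the ranges above, not commitments.
GOAL = 100_000

rates = {
    "Grants to AI safety projects": 0.80,  # "~80%+"
    "Operations": 0.10,                    # "~10%"
    "Fiscal sponsorship fees": 0.07,       # midpoint of "~5-9%"
}

for category, rate in rates.items():
    print(f"{category}: ${GOAL * rate:,.0f}")

# Whatever remains after the named categories acts as a small buffer.
print(f"Buffer: ${GOAL * (1 - sum(rates.values())):,.0f}")
```

Under these assumptions, roughly $80,000 would flow directly to grantees, with about $17,000 covering operations and sponsorship fees combined.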

Who is on your team? What's your track record on similar projects?

Fund Manager – Jaeson Booker

  • Background: Software engineer and AI safety researcher.

  • Role: Full-time operator overseeing all aspects of the Fund, including grantmaking, operations, and fundraising.

Fund Advisors 

Kabir Kumar

  • Founder of AI Plans

Seth Herd

  • Independent AI safety researcher

Jobst Heitzig

  • Project Lead of SatisfIA

What are the most likely causes and outcomes if this project fails?

Most likely causes of failure:

  • Failure to secure initial donor commitments to reach operational sustainability.

  • Inability to attract and retain qualified advisors or operations staff.

  • Lack of visibility or credibility in the broader AI safety ecosystem.

Most likely outcomes if it fails:

  • The AI safety field will continue to suffer from a lack of early-stage and independent funding.

  • Promising projects will go unfunded or dissolve prematurely.

  • Donors interested solely in AI safety will lack a dedicated giving vehicle, potentially reducing the total funding entering the field.

The Institute for AI Policy and Strategy surveyed AI researchers and, in the resulting report, identified the following bottlenecks:

  • Resources: While funding for AI Reliability & Safety has increased in recent years, it remains inadequate relative to the scale and urgency of the problem. 

  • Expertise: The field faces a shortage of researchers with the necessary technical skills. 

  • Uncertainty: There is considerable uncertainty about which research directions offer the most promise for risk reduction per unit of effort.

“Despite widespread awareness of this urgent challenge, significant gaps remain in coordinating and prioritizing technical AI reliability and security.”

- Expert Survey: AI Reliability & Security Research Priorities

How much money have you raised in the last 12 months, and from where?

As of now, we are in the pre-launch fundraising phase. Our goal is to secure:

  • $25,000 in minimum pledged commitments to initiate operations.

  • $100,000 in total pledged support from anchor donors ($10,000–$25,000), founding supporters ($1,000–$5,000), and community members (recurring donations).

We are actively seeking these pledges and anticipate having a committed funding base before launching the first grant round in September 2025.

The Cost of Inaction

The counterfactual to this project is that nothing happens: less funding flows into AI safety, fewer alignment researchers receive a salary, fewer new safety orgs are created, and fewer prospective researchers are trained and onboarded.

Time is not on our side. The safety community needs more scalable and responsive funding infrastructure. We're here to build it.

We recognize the critical, time-sensitive need this initiative addresses, and we invite you to join us. Our existence directly translates into increased funding, better-compensated researchers, new safety organizations, and the training of future talent, all vital for a positive AI future.

Let's ensure the future of AI is safe, sane, and human-aligned.

Help us fund it.

Contact: contact@ai-safety-fund.org 

Interested in getting involved? Fill out our interest form: https://docs.google.com/forms/d/e/1FAIpQLSezw-3GFKHNB-1n4j4EBLie2EPt5wFUy68OrgASHM90qMRO5A/viewform 

Want to wait until we get 501(c)(3) status to donate? You can still pledge a donation here: https://docs.google.com/forms/d/e/1FAIpQLSezw-3GFKHNB-1n4j4EBLie2EPt5wFUy68OrgASHM90qMRO5A/viewform 

More details about the structure and plans for the fund: https://docs.google.com/document/d/12a9-WzdH_IDmaTYmVtk5n16syraKpLGlMdk9EriEMcY/edit?usp=sharing  

Website: https://www.ai-safety-fund.org/ 


Similar projects
SaferAI: General support for SaferAI

Support for SaferAI’s technical and governance research and education programs to enable responsible and safe AI.

AI governance · $100K raised
AI Safety India: Fundamentals of Safe AI - Practical Track (Open Globally)

Bridging Theory to Practice: A 10-week program building AI safety skills through hands-on application

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks · $0 raised
Jaeson Booker: Jaeson's Independent Alignment Research and work on Accelerating Alignment

Collective intelligence systems, Mechanism Design, and Accelerating Alignment

Technical AI safety · $0 raised
Jonas Vollmer: AI forecasting and policy research by the AI 2027 team

AI Futures Project

AI governance · Forecasting · $35.6K raised
Allison Duettmann: Increasing the funding distributed by Foresight Institute's AI safety grants

Focused on 1. BCI and WBE for safe AI, 2. cryptography and security for safe AI, and 3. safe multipolar AI

Science & technology · Technical AI safety · AI governance · $0 raised
Angie Normandale: Diversify Funding for AI Safety

Seeding a business which finds grants and High Net Worth Individuals beyond EA

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks · $0 raised
Jonas Kgomo: AI Alignment Research Lab for Africa

AI Safety lab focusing on technical alignment and governance of AI in Africa and the Global South more broadly. We are a grassroots, community-led research lab.

Technical AI safety · $2.45K raised
AI Safety and Governance Fund: Testing and spreading messages to reduce AI x-risk

Educating the general public about AI and its risks in the most efficient ways, and leveraging this to achieve good policy outcomes

AI governance · EA Community Choice · $12.6K raised