Project Summary
The AI Safety Research Fund is a proposed 501(c)(3) nonprofit dedicated exclusively to advancing AI safety through targeted, transparent, and independent grantmaking. This Manifund project is intended to be a catalyst for securing initial donations so that we can set up operations and obtain fiscal sponsorship. Our goal is to address critical gaps in the current funding landscape by fundraising outside of traditional EA and longtermist circles. Our operations will be guided by responsiveness, accessibility to average donors, and a commitment to broadening the AI safety ecosystem.
What are this project's goals? How will you achieve them?
Goals:
Increase the total amount of funding going to AI safety.
Streamline the process of applying for and distributing safety-focused grants.
Provide early-stage support to promising but underfunded or unconventional projects.
Expand the donor base beyond traditional Effective Altruism circles.
How we'll achieve them:
Build a fully operational grantmaking nonprofit with fiscal sponsorship for immediate 501(c)(3) status.
Run regular, predictable grant rounds with transparent criteria and guaranteed decision timelines.
Offer seed funding and small grants to new organizations and individuals, especially early-stage and experimental projects.
Engage a wide range of donors through accessible fundraising campaigns, ranging from $10/month contributions to major gifts.
Operate with full transparency, publishing updates and impact reports for donors and the public.
How will this funding be used?
Initial funding will cover the fiscal sponsorship fee and the setup of basic operations. Once that threshold is met, funds will be allocated across three main categories:
Grants to AI Safety Projects (~80%+): Funding individual researchers, organizations, and field-building efforts.
Operations (~10%): Staffing (initially one full-time Fund Manager, with additional hires as we scale), grant processing, donor engagement, and reporting. This share may be larger at the outset, depending on staffing needs, before settling at around 10%.
Fiscal Sponsorship Fees (~5–9%): Covering administrative overhead from a fiscal sponsor to provide nonprofit infrastructure, tax deductibility, and legal compliance.
Who is on your team? What's your track record on similar projects?
Fund Manager – Jaeson Booker
Fund Advisors
Kabir Kumar
Seth Herd
Jobst Heitzig
What are the most likely causes and outcomes if this project fails?
Most likely causes of failure:
Failure to secure initial donor commitments to reach operational sustainability.
Inability to attract and retain qualified advisors or operations staff.
Lack of visibility or credibility in the broader AI safety ecosystem.
Most likely outcomes if it fails:
The AI safety field will continue to suffer from a lack of early-stage and independent funding.
Promising projects will go unfunded or dissolve prematurely.
Donors interested solely in AI safety will lack a dedicated giving vehicle, potentially reducing the total funding entering the field.
The Institute for AI Policy and Strategy released a report surveying AI researchers and identified the following bottlenecks:
Resources: While funding for AI Reliability & Safety has increased in recent years, it remains inadequate relative to the scale and urgency of the problem.
Expertise: The field faces a shortage of researchers with the necessary technical skills.
Uncertainty: There is considerable uncertainty about which research directions offer the most promise for risk reduction per unit of effort.
“Despite widespread awareness of this urgent challenge, significant gaps remain in coordinating and prioritizing technical AI reliability and security.”
-Expert Survey: AI Reliability & Security Research Priorities
How much money have you raised in the last 12 months, and from where?
As of now, we are in the pre-launch fundraising phase. Our goal is to secure:
$25,000 in minimum pledged commitments to initiate operations.
$100,000 in total pledged support from anchor donors ($10,000–$25,000), founding supporters ($1,000–$5,000), and community members (recurring donations).
We are actively seeking these pledges and anticipate having a committed funding base before launching the first grant round in September 2025.
The Cost of Inaction
The counterfactual to this project is that nothing happens: less funding flows into AI safety, fewer alignment researchers receive a salary, fewer new safety orgs are created, and fewer prospective researchers are trained and onboarded.
Time is not on our side. The safety community needs more scalable and responsive funding infrastructure. We're here to build it.
We recognize the critical, time-sensitive need for this initiative, and we invite you to join us. Our existence directly translates into increased funding, better-compensated researchers, the creation of new safety organizations, and the training of future talent, all vital for a positive AI future.
Let's ensure the future of AI is safe, sane, and human-aligned.
Help us fund it.
Contact: contact@ai-safety-fund.org
Interested in getting involved? Fill out our interest form: https://docs.google.com/forms/d/e/1FAIpQLSezw-3GFKHNB-1n4j4EBLie2EPt5wFUy68OrgASHM90qMRO5A/viewform
Want to wait until we get 501(c)(3) status to donate? You can still pledge a donation here: https://docs.google.com/forms/d/e/1FAIpQLSezw-3GFKHNB-1n4j4EBLie2EPt5wFUy68OrgASHM90qMRO5A/viewform
More details about the structure and plans for the fund: https://docs.google.com/document/d/12a9-WzdH_IDmaTYmVtk5n16syraKpLGlMdk9EriEMcY/edit?usp=sharing
Website: https://www.ai-safety-fund.org/