Project Summary
The AI Safety Research Fund is a proposed 501(c)(3) nonprofit dedicated exclusively to advancing AI safety through targeted, transparent, and independent grantmaking. We have already been accepted for fiscal sponsorship by Anti Entropy, a 501(c)(3), contingent on our raising initial funding. This Manifund project is intended to catalyze those initial donations so that we can set up operations and activate our fiscal sponsorship. Our goal is to fix critical gaps in the current funding landscape by fundraising outside of traditional EA and longtermist circles. Our operations will be guided by responsiveness, accessibility to average donors, and a commitment to broadening the AI safety ecosystem.
What are this project's goals? How will you achieve them?
Goals:
Increase the total amount of funding going to AI safety.
Streamline the process of applying for and distributing safety-focused grants.
Provide early-stage support to promising but underfunded or unconventional projects.
Expand the donor base beyond traditional Effective Altruism circles.
How we'll achieve them:
Build a fully operational grantmaking nonprofit with fiscal sponsorship for immediate 501(c)(3) status.
Run regular, predictable grant rounds with transparent criteria and guaranteed decision timelines.
Offer seed funding and small grants to new organizations and individuals, especially early-stage and experimental projects.
Engage a wide range of donors through accessible fundraising campaigns, ranging from $10/month contributions to major gifts.
Operate with full transparency, publishing updates and impact reports for donors and the public.
How will this funding be used?
Initial funding will go toward paying the fiscal sponsor fee and setting up operations. Once that is secured, funds will be allocated across three main categories:
Grants to AI Safety Projects (~80%+): Funding individual researchers, organizations, and field-building efforts.
Operations (~10%): Staffing (initially one full-time Fund Manager, with additional hires as we scale), grant processing, donor engagement, and reporting. This share may be higher initially, depending on staffing needs, before settling at around 10%.
Fiscal Sponsorship Fees (~5–9%): Covering administrative overhead from a fiscal sponsor to provide nonprofit infrastructure, tax deductibility, and legal compliance.
Who is on your team? What's your track record on similar projects?
Fund Manager – Jaeson Booker
Fund Advisors
Kabir Kumar
Seth Herd
Jobst Heitzig
What are the most likely causes and outcomes if this project fails?
Most likely causes of failure:
Failure to secure initial donor commitments to reach operational sustainability.
Inability to attract and retain qualified advisors or operations staff.
Lack of visibility or credibility in the broader AI safety ecosystem.
Most likely outcomes if it fails:
The AI safety field will continue to suffer from a lack of early-stage and independent funding.
Promising projects will go unfunded or dissolve prematurely.
Donors interested solely in AI safety will lack a dedicated giving vehicle, potentially reducing the total funding entering the field.
The Institute for AI Policy and Strategy released a report based on a survey of AI researchers and identified the following bottlenecks:
Resources: While funding for AI Reliability & Safety has increased in recent years, it remains inadequate relative to the scale and urgency of the problem.
Expertise: The field faces a shortage of researchers with the necessary technical skills.
Uncertainty: There is considerable uncertainty about which research directions offer the most promise for risk reduction per unit of effort.
“Despite widespread awareness of this urgent challenge, significant gaps remain in coordinating and prioritizing technical AI reliability and security.”
-Expert Survey: AI Reliability & Security Research Priorities
How much money have you raised in the last 12 months, and from where?
As of now, we are in the pre-launch fundraising phase. Our goal is to secure:
$25,000 in minimum pledged commitments to initiate operations.
$100,000 in total pledged support from anchor donors ($10,000–$25,000), founding supporters ($1,000–$5,000), and community members (recurring donations).
We are actively seeking these pledges and anticipate having a committed funding base before launching the first grant round in September 2025.
The Path Forward
We will seek to mitigate potential failure modes and anticipate future bottlenecks before they manifest, using previous organizations as data points.
Operational bottlenecks: The AI Safety Research Fund will work to avoid operational bottlenecks in two ways. First, it will not set hard constraints on how much of the funding is dedicated to operations, nor maintain a separate fund for paying staff. Second, it will make the fund as modular as possible: once we reach a certain scale, Grant Managers will each be delegated their own subfield. They will be accountable for that subfield's applications, issuing their own funding recommendations, with a projected portion of the funds allocated to that subfield.
Spinoff from fiscal sponsor: As we spin off from our fiscal sponsor, the Board of Advisors will transition into the Board of Trustees. The Trustees will thus be individuals who have worked with the fund for 12–24 months, giving us a clear understanding of their commitment and trustworthiness.
Separate fundraising and grant cycles: We will schedule our different activities at different times so that each receives our undivided attention. The cycle begins with a dedicated fundraising period. Once fundraising concludes, a strategy period follows, in which we set priorities based on the funding raised. Next comes a grant round built on that strategy, focused on attracting high-value applicants; this may include offering rewards to those who refer promising applicants. Finally, a decision round determines who gets funded.
Applications: We want to make applying as time-efficient as possible for all parties. The process starts with a quick initial application, designed to filter out applicants we can tell from minimal information would not be a funding priority, so we do not waste their time either. The second-stage application requires more detail and more time to complete, but is reserved for applicants we are seriously considering. Once the organization reaches a certain scale, we will split this second stage into two tracks: one for smaller grants (e.g., under $10k) requiring fewer details, and one for larger grants requiring more scrutiny and detail. It is not one-size-fits-all, and we will fine-tune the process as we go.
Fundraising initiatives: We plan to try a variety of fundraising tactics, see which ones bear fruit, and scale the most successful into larger campaigns. These include social media campaigns, grassroots organizing, outreach to high-income donors, leveraging existing networks and academic circles, and fundraising events. We also aim to be positioned to capitalize quickly on any sudden increase in interest in funding AI safety projects.
The Cost of Inaction
The counterfactual to this project is that nothing happens: less funding flows into AI safety, fewer alignment researchers receive salaries, fewer new safety organizations are created, and fewer prospective researchers are trained and onboarded.
Time is not on our side. The safety community needs more scalable and responsive funding infrastructure. We're here to build it.
This initiative meets a critical, time-sensitive need, and we invite you to join us. Our existence directly translates into increased funding, better-compensated researchers, new safety organizations, and trained future talent, all vital for a positive AI future.
Let's ensure the future of AI is safe, sane, and human-aligned.
Help us fund it.
Contact: contact@ai-safety-fund.org
Interested in getting involved? Fill out our interest form: https://docs.google.com/forms/d/e/1FAIpQLSezw-3GFKHNB-1n4j4EBLie2EPt5wFUy68OrgASHM90qMRO5A/viewform
Want to wait until we get 501(c)(3) status to donate? You can still pledge a donation here: https://docs.google.com/forms/d/e/1FAIpQLSezw-3GFKHNB-1n4j4EBLie2EPt5wFUy68OrgASHM90qMRO5A/viewform
More details about the structure and plans for the fund: https://docs.google.com/document/d/12a9-WzdH_IDmaTYmVtk5n16syraKpLGlMdk9EriEMcY/edit?usp=sharing
Website: https://www.ai-safety-fund.org/