
Funding requirements

  • Sign grant agreement

  • Reach min funding

  • Get Manifund approval

The AI Safety Research Fund

Technical AI safety

Jaeson Booker

Proposal · Grant
Closes July 31st, 2025
$100 raised
$25,000 minimum funding
$100,000 funding goal

37 days left to contribute

You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.

Project Summary

The AI Safety Research Fund is a proposed 501(c)(3) nonprofit dedicated exclusively to advancing AI safety through targeted, transparent, and independent grantmaking. This Manifund project is intended to be a catalyst for securing initial donations so that we can set up operations and obtain fiscal sponsorship. Our goal is to fix critical gaps in the current funding landscape by fundraising outside of traditional EA and longtermist circles. Our operations will be guided by responsiveness, accessibility to average donors, and a commitment to broadening the AI safety ecosystem.

What are this project's goals? How will you achieve them?

Goals:

  1. Increase the total amount of funding going to AI safety.

  2. Streamline the process of applying for and distributing safety-focused grants.

  3. Provide early-stage support to promising but underfunded or unconventional projects.

  4. Expand the donor base beyond traditional Effective Altruism circles.

How we'll achieve them:

  • Build a fully operational grantmaking nonprofit with fiscal sponsorship for immediate 501(c)(3) status.

  • Run regular, predictable grant rounds with transparent criteria and guaranteed decision timelines.

  • Offer seed funding and small grants to new organizations and individuals, especially early-stage and experimental projects.

  • Engage a wide range of donors through accessible fundraising campaigns, ranging from $10/month contributions to major gifts.

  • Operate with full transparency, publishing updates and impact reports for donors and the public.

How will this funding be used?

Initial funding will go toward paying a fiscal sponsor fee and setting up operations. Once that is secured, funds will be used in three main categories (an illustrative split follows the list):

  1. Grants to AI Safety Projects (~80%+): Funding individual researchers, organizations, and field-building efforts.

  2. Operations (~10%): Staffing (initially one full-time Fund Manager, with additional hires as we scale), grant processing, donor engagement, and reporting. This share may initially be larger, depending on staffing needs, before dropping to around 10%.

  3. Fiscal Sponsorship Fees (~5–9%): Covering administrative overhead from a fiscal sponsor to provide nonprofit infrastructure, tax deductibility, and legal compliance.
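Purely as an illustration (not part of the proposal), the sketch below splits a hypothetical raise across the three categories using the figures above; the 7% sponsorship fee and the $100,000 example total are assumptions chosen for the example.

```python
# Illustrative only: split a hypothetical raise using the approximate
# percentages above. The fee rate and example total are assumptions.

def split_budget(total_raised: float, fee_rate: float = 0.07) -> dict:
    """Allocate a raise into grants, operations, and fiscal-sponsor fees.

    fee_rate: assumed fiscal sponsorship fee (stated range is ~5-9%).
    Operations are held at the stated ~10% target; the remainder
    (~80%+) goes to grants.
    """
    fees = total_raised * fee_rate
    operations = total_raised * 0.10
    grants = total_raised - fees - operations
    return {"grants": grants, "operations": operations, "fees": fees}

if __name__ == "__main__":
    # Example: the $100,000 funding goal at an assumed 7% sponsorship fee.
    for category, amount in split_budget(100_000).items():
        print(f"{category:>10}: ${amount:,.0f}")
```

At these assumed rates, roughly $83,000 of a $100,000 raise would reach grantees, with about $10,000 for operations and $7,000 for sponsorship fees.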

Who is on your team? What's your track record on similar projects?

Fund Manager – Jaeson Booker

  • Background: Software engineer and AI safety researcher.

  • Role: Full-time operator overseeing all aspects of the Fund, including grantmaking, operations, and fundraising.

Fund Advisors 

Kabir Kumar

  • Founder of AI Plans

Seth Herd

  • Independent AI safety researcher

Jobst Heitzig

  • Project Lead of SatisfIA

What are the most likely causes and outcomes if this project fails?

Most likely causes of failure:

  • Failure to secure initial donor commitments to reach operational sustainability.

  • Inability to attract and retain qualified advisors or operations staff.

  • Lack of visibility or credibility in the broader AI safety ecosystem.

Most likely outcomes if it fails:

  • The AI safety field will continue to suffer from a lack of early-stage and independent funding.

  • Promising projects will go unfunded or dissolve prematurely.

  • Donors interested solely in AI safety will lack a dedicated giving vehicle, potentially reducing the total funding entering the field.

The Institute for AI Policy and Strategy released a report surveying AI researchers, which identified the following bottlenecks:

  • Resources: While funding for AI Reliability & Safety has increased in recent years, it remains inadequate relative to the scale and urgency of the problem. 

  • Expertise: The field faces a shortage of researchers with the necessary technical skills. 

  • Uncertainty: There is considerable uncertainty about which research directions offer the most promise for risk reduction per unit of effort.

“Despite widespread awareness of this urgent challenge, significant gaps remain in coordinating and prioritizing technical AI reliability and security.”

 -Expert Survey: AI Reliability & Security Research Priorities

How much money have you raised in the last 12 months, and from where?

As of now, we are in the pre-launch fundraising phase. Our goal is to secure:

  • $25,000 in minimum pledged commitments to initiate operations.

  • $100,000 in total pledged support from anchor donors ($10,000–$25,000), founding supporters ($1,000–$5,000), and community members (recurring donations).

We are actively seeking these pledges and anticipate having a committed funding base before launching the first grant round in September 2025.
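Purely for illustration, the sketch below shows one hypothetical mix of pledges that would clear the $100,000 goal; the donor counts and per-donor amounts are assumptions for the example, not actual commitments.

```python
# Illustration only: one hypothetical pledge mix reaching the $100,000 goal.
# Donor counts and amounts are assumptions, not actual commitments.

GOAL = 100_000

pledge_mix = {
    "anchor donors (3 x $25,000)": 3 * 25_000,
    "founding supporters (5 x $2,500)": 5 * 2_500,
    "community members (105 x $10/month for 12 months)": 105 * 10 * 12,
}

total = sum(pledge_mix.values())
for tier, amount in pledge_mix.items():
    print(f"{tier}: ${amount:,}")
print(f"total pledged: ${total:,} (goal met: {total >= GOAL})")
```

Any comparable mix of anchor, founding, and recurring pledges that sums past the goal would serve the same purpose.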

The Cost of Inaction

The counterfactual to this project is: nothing happens. This means less funding will be flowing into AI Safety, fewer alignment researchers will receive a salary, fewer new safety orgs will be created, and fewer prospective researchers will be trained and onboarded.

Time is not on our side. The safety community needs more scalable and responsive funding infrastructure. We're here to build it.

We recognize the critical, time-sensitive need for this initiative and invite you to join us. Our existence directly translates to increased funding, better-compensated researchers, the creation of new safety organizations, and the training of future talent, all vital for a positive AI future.

Let's ensure the future of AI is safe, sane, and human-aligned.

Help us fund it.

Contact: contact@ai-safety-fund.org 

Interested in getting involved? Fill out our interest form: https://docs.google.com/forms/d/e/1FAIpQLSezw-3GFKHNB-1n4j4EBLie2EPt5wFUy68OrgASHM90qMRO5A/viewform 

Want to wait until we get 501(c)(3) status to donate? You can still pledge a donation here: https://docs.google.com/forms/d/e/1FAIpQLSezw-3GFKHNB-1n4j4EBLie2EPt5wFUy68OrgASHM90qMRO5A/viewform 

More details about the structure and plans for the fund: https://docs.google.com/document/d/12a9-WzdH_IDmaTYmVtk5n16syraKpLGlMdk9EriEMcY/edit?usp=sharing  

Website: https://www.ai-safety-fund.org/ 


Comments (13)

Marcus Abramovitch

1 day ago

Since I've already communicated my thoughts privately:

  1. Can you give any examples of gaps in the funding ecosystem you are planning to solve that aren't covered by a mix of OP, SFF, LTFF, Jueyan's AISTOF, Manifund, and others?

  2. What is some critical, time-sensitive, or otherwise important work that is currently not being funded because it's being neglected by current donors?

  3. Who are the donors you are currently working with who would donate to AI safety but aren't?


Jaeson Booker

about 21 hours ago

1 & 2: The only broad survey from those in the field, as cited in the project summary, lists lack of funds as a critical bottleneck. On top of that, there is a long history of unexpected and sudden drops in funding, severe delays in decision timelines, and opaque decision criteria. I won’t go into all of the ones I know of, but here are some examples: AI Safety Camp struggled to get funding (1), despite many in the community viewing them as highly impactful (2), and almost shut down as a result. Lighthaven, also viewed as highly impactful (3), struggled to get funding for years, despite also being regarded by many as very impactful (3). Most recently, Apart Research who, despite outperforming on their previous grant from LTFF (4), was turned down because LTFF is funding constrained, and OpenPhil did not respond in the expected timeframe (5). Regardless of how you feel about Apart’s impact, the reason they were not funded was not because they were judged to not be impactful, but because of a decrease in funds available. Goodventures has caused OpenPhil not to fund certain impact areas (6), despite some thinking they are critically important (7). LTFF has also been known for being extremely capacity constrained (8). There’s also the problem of just how much of the funds flow from very few sources, resulting in single points of failure, which can result in chaotic outcomes, such as the collapse of FTX Future Fund and the sudden decisions made by Good Ventures (9). It has likely also resulted in the conforming of ideas to suit only the world models of a few, most likely suffocating alternative ones (10). I have spoken to many who have had similar situations, where the problem did not seem to be the lack of a project’s promise or skill of the grantee, but sudden shifts in funding and a lack of clearly-communicated timelines to hear back. I have also spoken with individual researchers who had to leverage their own network and time to get promising research funded From others I have spoken with, this has also resulted in people leaving the AI Safety space altogether and working instead on capabilities research. I think the indirect cost is harder to measure, but probably much greater. Many talented people might care about AI going well, but their threshold for sacrifice might be lower than the one demanded of them currently in the community. They want a reliable community with easy channels to get involved, with dependable funding. I’m not going to pretend I can solve all of these issues, but I think the problem is there and this is a start in a better direction.

3: I think there is too much “whale hunting.” As I said, I think high-leverage donors are useful, and I’m fine with others continuing to pursue them, but they also carry the risks mentioned before: namely, single points of failure, which produce funding shocks felt around the ecosystem, and conformity to the world models held by those donors. I’m aiming more for sub-billionaires. I think there’s potential among wealthy individuals who are not very connected to the AI safety space but who are already concerned, as well as grassroots campaigns for a more dispersed fundraising approach. I think the latter could be very important in the coming years if AI continues to improve and gain more attention. By 2027, the funding landscape could scarcely resemble the current one, and I think setting up funds now, ready to capitalize on that, will be important. The projects that need funding, and the number of people capable of executing them, might also change. Even if you think most useful projects are being funded today, that doesn’t mean there won’t be a much wider range of useful projects tomorrow.

1: https://www.lesswrong.com/posts/EAZjXKNN2vgoJGF9Y/this-might-be-the-last-ai-safety-camp 

2: https://thezvi.substack.com/p/the-big-nonprofits-post?open=false#%C2%A7ai-safety-camp 

3: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc/the-lightcone-is-nothing-without-its-people 

4: https://forum.effectivealtruism.org/posts/x5R4mpJRqPwpQAPqv/why-is-apart-research-suddenly-in-dire-need-of-funding 

5: https://forum.effectivealtruism.org/posts/x5R4mpJRqPwpQAPqv/why-is-apart-research-suddenly-in-dire-need-of-funding 

6: https://www.goodventures.org/blog/an-update-from-good-ventures/ 

7: https://www.youtube.com/watch?v=uD37AKRx2fg&t=4965s 

8: https://forum.effectivealtruism.org/posts/ee8Pamunhqabucwjq/long-term-future-fund-ask-us-anything-september-2023?commentId=NvuGEcKFLQrioBuRH 

9: https://docs.google.com/document/d/1EYCMHa6_7Mudb4s1MDvppGMY5BmHEVvryGw9cX_dlQ8/edit?tab=t.0 

10: https://www.lesswrong.com/posts/FdHRkGziQviJ3t8rQ/discussion-about-ais-funding-fb-transcript 


Jaeson Booker

about 21 hours ago

Regarding Jueyan's AISTOF, I'm not as familiar with it, so I can't speak to how effective it is or what gaps it may be filling. Of the current funds, I'm most optimistic about Longview.


Anton Makiievskyi

2 days ago

Why not just be Manifund’s regrantor? Everything is already set-up. Manifund was explicitly looking for more re-grantors


Jaeson Booker

2 days ago

@AntonMakiievskyi I'm open to the idea. My current mentality is that Manifund is not very scalable, at least not for the sort of thing I'm trying to do. I don't think they're trying to fundraise from the people I'm looking to fundraise from.


Chris Leong

1 day ago

@JaesonB Might be worth applying if you run into challenges. Could be a decent way to test fit/build credibility.


Neel Nanda

5 days ago

I'd be curious to hear more on why you think donors should give money to you rather than directly to AI safety organisations, or to other regranters like the Long-Term Future Fund. For example, do you have much of a prior grant-making track record or otherwise evidence of better decision-making than donors might have? Or is there a specific market inefficiency other funders are neglecting that you have a plan to solve?


Jaeson Booker

5 days ago

@NeelNanda Hi, I don't think it should be thought of as predicting that our fund will have better decision-making (although there are other, higher-profile grant advisors who are interested in getting involved should we secure more funding). It's more a bet that we can 10x the amount donated now by obtaining fiscal sponsorship and the operational capacity to start the fund, and then fundraising outside of normal EA circles. I don't think LTFF can do this, since they're focused on longtermism (which doesn't interest most people), and they also appear to already be capacity-constrained. You cannot easily donate directly to organizations like OpenPhil. I think too many EAs are focused on the orchard they spent years cultivating and have forsaken any real attempt to go into the forest to forage.

My guesstimate is that there is 10x more funding potential from people who are growing concerned about AI but lack the knowledge or any easy channel for action. I think it is 10x right now. If AI continues to progress, which I expect it will, this could easily grow to 100x or even 1000x. I don't think it's crazy to think that, if set up early, there could be billions flowing into AI safety in 2027. But to get there, things need to be set up first, and early. That means getting 501(c)(3) status and building an initial track record, so that we can build trust with people outside the community and they can know that their money will be well spent.


Ryan Kidd

about 18 hours ago

@JaesonB, what about the AI Risk Mitigation Fund? They're not focused exclusively on longtermism and have the same grantmakers as the LTFF.


Jaeson Booker

about 1 hour ago

@RyanKidd To my knowledge, they're still setting up or determining their next steps for the fund. Hopefully, it goes well, but I fear similar capacity constraints to LTFF.


Ryan Kidd

15 minutes ago

@JaesonB, I also fear those capacity constraints. I'm curious why you think the solution is a separate fund, rather than alleviating the capacity constraints of an already-proven fund? Additionally, why wouldn't exactly the same constraints (e.g., money, competent grantmakers) bottleneck your fund?


Ryan Kidd

8 minutes ago

I think there's a gap for an organization like The Life You Can Save for AI safety, which would encourage donations and pledges to top charities, but I don't really see how your proposed fund is a better alternative to scaling the AI Risk Mitigation Fund or Manifund. If the argument is "we bring in additional donors because of our non-longtermist affiliation", the same could be said for ARM and Manifund. If the argument is "we add additional grantmaker capacity", I would counter "why can't these same grantmakers join ARM or just use Manifund?" (Possible answer: maybe ARM's bar is too high and Manifund isn't an attractive target for donations.) Basically, I think we do need more funders in the medium to long term, but the experience and reputation of the ARM grantmakers is, on average, much greater than that of your proposed grantmakers, and Manifund already exists as a short-term regranting solution that I would rather grow. To be clear, I don't want this project not to happen at all, but I would rather it be rescoped as something closer to TLYCS, as there is a much larger gap there than for another funder constrained by the same things as existing funders, albeit less experienced.


Ryan Kidd

1 minute ago

@JaesonB, if I were in your shoes, I would prioritize these things in order:

  1. Talk to ARM and ask them how they are capacity constrained, then help alleviate that constraint.

  2. If ARM is constrained by hiring great grantmakers, help them build a great hiring pipeline.

  3. If ARM is constrained by funding, build a TLYCS for AI safety, to encourage mass donations to ARM.

  4. If ARM is constrained by unfixable factors (e.g., no one is driving it and they refuse help), first try cutting your teeth via regranting on Manifund. If this goes well, it doesn't seem crazy to set up another fund. Note that many early grantmakers for EA Funds also worked at GiveWell or Open Philanthropy.