@JaesonB
AI Safety Fund - Founder
https://www.linkedin.com/in/jaeson-booker
$100 in pending offers
Previous software engineer, AI researcher, and startup founder.
Jaeson Booker
2 days ago
@RyanKidd I would be interested in regranting for Manifund, but do fear that it might steer me toward looking good in the eyes of the other regranters, and away from funding what might actually be most needed. I am open to it, though.
Jaeson Booker
2 days ago
@RyanKidd I've actually already reached out to ARM, and they were encouraging of the idea of creating new funds for AI Safety separate from ARM. I also mentioned a collaboration of some kind, but they said they are focused on figuring out the fund's strategy in the coming months, and asked me to reach out again at that point.
Jaeson Booker
2 days ago
@RyanKidd To my knowledge, they're still setting up or determining their next steps for the fund. Hopefully, it goes well, but I fear similar capacity constraints to LTFF.
Jaeson Booker
3 days ago
Regarding Jueyan's AISTOF, I'm not as familiar with it, so I can't speak to how effective it is or what gaps it may be filling. Of the current funds, I'm most optimistic about Longview.
Jaeson Booker
3 days ago
1 & 2: The only broad survey of people in the field, as cited in the project summary, lists lack of funds as a critical bottleneck. On top of that, there is a long history of unexpected and sudden drops in funding, severe delays in decision timelines, and opaque decision criteria. I won't go into all of the cases I know of, but here are some examples. AI Safety Camp struggled to get funding (1), despite many in the community viewing it as highly impactful (2), and almost shut down as a result. Lighthaven, also regarded by many as very impactful (3), struggled to get funding for years. Most recently, Apart Research, despite outperforming on its previous grant from LTFF (4), was turned down because LTFF is funding-constrained and OpenPhil did not respond in the expected timeframe (5). Regardless of how you feel about Apart's impact, the reason it was not funded was not a judgment that it lacked impact, but a decrease in funds available. Good Ventures has caused OpenPhil to stop funding certain impact areas (6), despite some thinking they are critically important (7). LTFF has also been known for being extremely capacity-constrained (8).

There's also the problem of how much of the funding flows from very few sources, creating single points of failure that can produce chaotic outcomes, such as the collapse of the FTX Future Fund and the sudden decisions made by Good Ventures (9). It has likely also pushed ideas to conform to the world models of a few, most likely suffocating alternative ones (10). I have spoken to many who have been in similar situations, where the problem did not seem to be a lack of promise in the project or skill in the grantee, but sudden shifts in funding and a lack of clearly communicated timelines to hear back. I have also spoken with individual researchers who had to leverage their own networks and time to get promising research funded. From others I have spoken with, this has also resulted in people leaving the AI Safety space altogether and working instead on capabilities research. I think the indirect cost is harder to measure, but probably much greater: many talented people might care about AI going well, but their threshold for sacrifice might be lower than the one currently demanded of them in the community. They want a reliable community with easy channels to get involved and dependable funding. I'm not going to pretend I can solve all of these issues, but the problem is there, and this is a start in a better direction.
3: I think there is too much "whale hunting." As I said, high-leverage donors are useful, and I'm fine with others continuing to pursue them, but they also carry the risks mentioned before: single points of failure, which produce funding shocks felt around the ecosystem, and conformity to the world models held by the donors. I'm aiming more for sub-billionaires. I see potential in wealthy individuals who are not very connected to the AI Safety space but are already concerned, and also in grassroots campaigns for a more dispersed fundraising approach. The latter could be very important in the coming years if AI continues to improve and gain more attention. By 2027, the funding landscape could scarcely resemble the current one, and setting up funds now that are ready to capitalize on that will be important. Which projects need to be funded, and the number of people capable of executing them, might also change. Even if you think most useful projects are being funded today, that doesn't mean there won't be a much wider range of useful projects tomorrow.
1: https://www.lesswrong.com/posts/EAZjXKNN2vgoJGF9Y/this-might-be-the-last-ai-safety-camp
2: https://thezvi.substack.com/p/the-big-nonprofits-post?open=false#%C2%A7ai-safety-camp
3: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc/the-lightcone-is-nothing-without-its-people
6: https://www.goodventures.org/blog/an-update-from-good-ventures/
7: https://www.youtube.com/watch?v=uD37AKRx2fg&t=4965s
9: https://docs.google.com/document/d/1EYCMHa6_7Mudb4s1MDvppGMY5BmHEVvryGw9cX_dlQ8/edit?tab=t.0
10: https://www.lesswrong.com/posts/FdHRkGziQviJ3t8rQ/discussion-about-ais-funding-fb-transcript
Jaeson Booker
4 days ago
@AntonMakiievskyi I'm open to the idea. My current mentality is that Manifund is not very scalable, at least not for the sort of thing I'm trying to do. I don't think they're trying to fundraise from the people I'm looking to fundraise from.
Jaeson Booker
7 days ago
@NeelNanda Hi, I think it shouldn't be thought of as predicting that our fund will have better decision-making (although there are other, higher-profile grant advisors who are interested in getting involved should we get more funding). It's more of a bet that we can 10x the amount donated now by obtaining fiscal sponsorship and the operational capacity to start the fund, and then fundraising outside of normal EA circles. I don't think LTFF can do this, since they're focused on longtermism (which doesn't interest most people), and they also appear to already be capacity-constrained. You cannot easily donate directly to organizations like OpenPhil. I think too many EAs are focused on the orchard they spent years cultivating, and have forsaken any real attempt to go into the forest to forage. My estimate is that there is 10x more funding potential from people who are growing concerned about AI but lack the knowledge or any easy channel for action. I think it is 10x right now; if AI continues to progress, which I expect it will, this could easily grow to 100x or even 1000x. I don't think it's crazy to think that, if set up early, there could be billions flowing into AI Safety in 2027. But to get there, things need to be set up first, and early. This means getting 501(c)(3) status and building an initial track record, so that we can build trust with people outside and they can know that their money will be well spent.
Jaeson Booker
over 1 year ago
@Chris Leong Strategy is a synthesis of ideas and finding the right people to work with to create useful projects and plans. I doubt I can do much to synthesize a strategy org without this level of in-person collaboration.
| For | Date | Type | Amount |
|---|---|---|---|
| Manifund Bank | 7 days ago | deposit | +100 |
| AI-Plans.com Critique-a-Thon $500 Prize Fund Proposal | almost 2 years ago | project donation | 500 |
| Manifund Bank | almost 2 years ago | deposit | +500 |