We partner with regrantors: experts in the field of AI safety, each given an independent budget. Regrantors recommend grants based on their personal expertise; Manifund reviews these recommendations and distributes the funds.
Our regranting program is inspired by the success of programs like the Future Fund's regrants, SFF's speculation grants, and Fast Grants.
Neel Nanda
“I think that understanding, detecting and potentially mitigating chain of thought unfaithfulness is a very important problem, especially with the rise of o1 models... I think Arthur is fairly good at supervising projects, and that under him Jett and Ivan have a decent shot of making progress, and that enabling this to start a month earlier is clearly a good idea.”
Leopold Aschenbrenner
“I think Epoch has done truly outstanding work on core trends in AI progress in the past few years. I'm also excited by their recent foray into benchmarking in the form of FrontierMath... Better benchmarks that help us forecast time to AGI (and especially time to relevant capabilities, such as automated AI research) and do so in a highly credible and scientific way are very valuable for informing policymakers and catalyzing important policy efforts.”
Adam Gleave
“I've generally been impressed by how well Timaeus have executed. They've in short order assembled a strong team who are collaborating & working well together, producing substantial research and outreach outputs. They have a distinctive research vision, and I think deserve some credit for popularizing studying the evolution of networks throughout training from an interpretability perspective with e.g. EleutherAI's interpretability team now pursuing their own "development interpretability" flavored research.”
Currently, all grant information is made public. This includes the identities of the regrantor and the grant recipient, the project description, the grant size, and the regrantor's writeup.
We strongly believe in transparency: it allows for meaningful public feedback, accountability for decisions, and the establishment of regrantor track records. We recognize that not all grants are suited to being published; for now, we recommend such grants be made through other funders, such as the Long Term Future Fund, the Survival and Flourishing Fund, or Open Philanthropy.
What kinds of projects are eligible for regranting?
We have no official cause-area restrictions on grants, though most of our regrantors focus on mitigating global catastrophic risk, particularly AI safety.
We support regrants to registered charities and individuals. For-profit organizations may also be eligible, pending due diligence. As a US-registered 501(c)(3), we generally do not permit donations to political campaigns or lobbying.
We review all grants before fulfilling withdrawal requests to make sure they meet these requirements. We reserve the right to veto grants for any reason, though we expect to usually defer to our regrantors' judgment.
Can regrantors send money to their own projects?
In certain circumstances, we allow regrantors to give to projects they advise or are otherwise involved with; we hold these grants to a higher bar of scrutiny before fulfilling withdrawal requests. We generally do not permit regrantors to pay their own salaries.
Can I contribute funds to the regrantor budgets?
Yes! We're looking for contributions to our AI safety regrantor budgets. If you're donating a substantial amount (e.g. $50k+), we can also work with you to nominate specific regrantors who share your values and interests. We do ask large donors to cover a 5% fiscal sponsorship fee, which offsets our operational costs and salaries.
Get in touch with Austin (austin@manifund.org) if you're interested in donating!