@RyanKidd Hi Ryan, thanks for your questions!
Concern about diversifying funding sources has recently been raised by Open Phil, AIM, the Meta Coordination Forum, and Rethink Priorities, so there’s clearly a general need for this. The aim is to add value by taking a different approach to outreach:
Representing organizations rather than donors
Rather than analyzing nonprofits, this project puts the AI Safety orgs’ needs first and looks for the donors best placed to meet them.
Meeting funders with a wider variety of interests
EA donor outreach tends to follow an ideas-first approach, persuading donors to subscribe to a particular set of premises and conclusions about doing the most good. It takes a lot of work to get people on board, but you end up with a small number of extremely committed donors.
This is the opposite approach. We look for opportunities outside the EA/longtermist/rationalist sphere and find the points of convergence with AI Safety. Meeting donors where they are in terms of interests is much easier than persuading them to subscribe to an entirely new worldview. This also gives us access to large government grants and research funds, which tend to be more reliable than individual donors.
Explicitly focusing on infrastructure and communication
From initial scoping, many grantmakers outside of EA are far less prescriptive about cause area, and instead make judgements based on the credibility of the applicant: their financial history, auditability, experience, reputation in the field, and ability to plan and manage large grants. Organizations will need to adapt their strategies to communicate effectively with these different audiences and interests, so that’s where I can hopefully add a lot of value.
Diversifying research opportunities
Just as ACX and Manifund’s AI safety regranters have their own research interests, diversifying funders should foster work with a wider variety of research tastes.
From preliminary discussions, here are some examples of what this work could look like:
Seeking women in tech funding to support Athena 2.0
Doing the legwork to help researchers get stipends for short-term projects (such as the MATS extension, ARENA, Apart, etc.), supporting applications to many small funds and academic institutions
Supporting a team to move to a mixed commercial model, thus massively increasing their funding pot for research (Lakera’s model)
Applying for academic grants that fall outside Open Phil’s technical expertise, e.g. bioinformatics research that could inform AI Safety, or research on vulnerable users of technology such as children and the elderly
Seeking out art, theater, journalism, and education grants for public awareness campaigns around AI Safety, for Pause AI and others
Helping LISA to build towards recognised public research institution status in the UK, providing access to millions in UK government grants for scientific research
Looking at government funding to build out international partnerships e.g. US-India AI Safety and governance programs
Regarding how I’d improve Manifund, ACX regranting, GiveWiki, etc., it would be great to widen the pool of both donors and interests. With the caveat that I’m a lawyer and love bureaucracy, I do think the lack of bureaucracy perpetuates uneven outcomes. There needs to be trust between donors and recipients, and without a formal application structure we rely on building trust through other fora. This becomes self-reinforcing and makes it difficult for international recipients and those outside the inner circle. So I’d like to see more space for international organizations to connect with potential funders.
I’m also very keen to see more women in the alignment space; there’s been great work on this front recently, and I’m looking forward to supporting initiatives like Athena in future.
I hope this answers your questions, feel free to ask more or reach out!