The Midas Project

AI governance · Global catastrophic risks
Tyler Johnston

Not funded · Grant · $0 raised

Writing this on behalf of The Midas Project. I am the founder and director and the only full-time employee. Previously, I worked at The Humane League and the Good Food Institute, and I received a Bachelor's degree from Harvard College. Since The Midas Project got rolling this summer, with less than $25,000 (and volunteer support), we've:

  • launched a campaign calling for an AI coding startup to conduct dangerous-capability evaluations (I don't think this campaign has been strong enough yet, although it did draw some response from the company, namely the release of an acceptable use policy)

  • co-led a petition against OpenAI concerning the gradual abandonment of its original safety and nonprofit mission

  • co-signed a campaign calling for social media companies to limit the spread of political deepfakes

  • created an AI safety change-monitoring platform

  • created a platform for digital activism concerning AI safety, with around 60 users

... and as of this week, got approved as a 501(c)(3) nonprofit in the US. This will hopefully unlock much more scaling next year. Optimistically, we'd like to raise another $119,000. Right now, we only have funding for the executive director's salary (my own) and minimal programs. Extra funding would go, in order of importance, toward:

  • Hiring a co-founder/program director

  • Hiring a full-time campaigner

  • Paid contracting for the website/digital platforms (so far, it's all homemade, and I fear it shows a little)

With as little as an additional $30,000, we could hire a second full-time employee, which I think would be the biggest unlock for me (being a solo founder is challenging, and I know CE, YC, and others report that having a second full-time cofounder is critical for success).

So far, our funding has come mainly from individual donors and (soon) SFF. We've received feedback that this work is challenging for some institutional funders to support because it's so adversarial (posing reputational and legal risk in particular). So small donations from individuals are particularly important for us.

I've never received specific constructive feedback from (potential) funders, so I can't share that, unfortunately. I suspect the strongest arguments in favor of supporting us are that (1) this work has been successful in other movements, and ~nobody is doing it for AI-related catastrophic risks*, and (2) I have experience doing this in the animal movement. I also think it's been cost-effective, given what we've accomplished with limited funding.

The strongest argument against is probably the weak track record from our first campaign (more on this in the next paragraph) and potential downside risk if our work polarizes staff at AI companies, turns them off from safety concerns, etc.

We do not yet have a strong track record of success on our key goals (actually causing important changes to self-governance and risk-evaluation practices at AI companies). I think that's because our reputation and programs aren't yet strong enough to move these companies. But given how new and minimally resourced we are, I think we've built a decent foundation, akin to what other nascent orgs have done with 10x our budget. Most of the value of contributing is probably speculative, i.e., the impact we could have in 2025 and beyond as we continue to grow. I think there are threshold effects in these campaigns: you need a critical mass before companies take the reputational threat seriously. But at a certain point, without signs of clear impact, we'd consider spinning down or pivoting to adjacent issues and strategies.

If you want to discuss anything about our plans (feedback, ideas, questions, whatever), you can send me an email or book a call with me directly. Or, better yet, do so as a comment to this post so everyone can see it.

* Accountable Tech runs similar campaigns for near-term AI risks (we collaborated on a deepfake campaign), and Control AI has run similar advocacy campaigns but (in my opinion) is making strategic mistakes that are pointing corporate incentives the wrong way.
