Modeling Cooperation builds software tools and conducts research to help AI governance researchers and policymakers understand the dynamics of AI competition. Our main project is the software platform behind the Intelligence Rising workshops: simulations used by senior figures in government, industry and academia to develop an intuitive grasp of how AI competition unfolds and what can be done about it.
We are seeking $26,000 in matched funding to unlock a committed grant from the Survival and Flourishing Fund, effectively doubling every dollar donated. We must raise at least $10,000 to unlock the match! To get your donation matched, please email us your Manifund transaction receipt and permission to be contacted by SFF for their due diligence purposes.
We are a small team making a unique contribution to AI governance. We have collaborated with leading researchers including Professor Robert Trager (co-director of AIGI, Oxford) and Dr. Shahar Avin (CSER, Cambridge & UK AISI).
The competition to build transformative AI is accelerating, with very little coordination on safety. Even if every major actor knew how to make its AI safer, competitive pressure could still make cutting corners rational: a lab that slows down risks being overtaken by one that does not. The result is a collective action problem, in which individually sensible decisions add up to a collectively dangerous outcome.
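The logic of this collective action problem can be made concrete with a toy payoff matrix. This is an illustrative sketch only; the numbers are hypothetical and are not drawn from our models:

```python
# Two labs each choose to invest in safety ("careful") or cut corners
# ("fast"). Payoffs are (lab A, lab B); all numbers are hypothetical.
payoffs = {
    ("careful", "careful"): (3, 3),  # both safe, shared benefit
    ("careful", "fast"):    (0, 4),  # the careful lab is overtaken
    ("fast",    "careful"): (4, 0),
    ("fast",    "fast"):    (1, 1),  # race to the bottom: worst shared outcome
}

def best_response(opponent_move):
    """The move that maximises a lab's own payoff, holding the
    opponent's move fixed."""
    return max(("careful", "fast"),
               key=lambda move: payoffs[(move, opponent_move)][0])

# Whatever the other lab does, cutting corners pays more individually...
assert best_response("careful") == "fast"
assert best_response("fast") == "fast"
# ...yet mutual corner-cutting leaves both worse off than mutual caution.
assert payoffs[("fast", "fast")] < payoffs[("careful", "careful")]
```

In this toy setup, "fast" is each lab's best response regardless of what the other does, so the race dynamic is individually rational even though both labs would prefer the mutually careful outcome.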
Modeling Cooperation focuses on making these extraordinary risks clear to decision-makers. We build software tools and conduct research that make the trade-offs between speed and safety explicit to leaders, fellow researchers, and anyone who needs a more tangible understanding of the risks ahead. Most AI governance work is qualitative, relying on historical analogies and policy intuition; most technical AI safety work focuses on alignment or interpretability.
No one else is approaching the problem from our standpoint: building scalable software to help decision-makers experience the competition dynamics that compromise the safety of frontier AI systems, and the unprecedented coordination required to increase the likelihood of good outcomes.
Our work falls into three areas:
Research software for decision-makers.
Our main project is the software platform behind the Intelligence Rising workshops, run by Dr. Shahar Avin and his team at Technology Strategy Roleplay. These workshops use a structured simulation game to help senior figures in governments, industry and academia develop an intuitive grasp of how AI competition unfolds and what can be done about it. Around 1,000 participants have taken part over five years. Over the past two years, our software has made those sessions easier to run and scale by simplifying and automating workshop facilitation, and making it much easier to train and onboard new facilitators. We also build and maintain several interactive tools that let researchers and policymakers explore AI competition models directly, including the Safety-Performance Tradeoff web app developed with Professor Robert Trager.
Original research on policy and competition dynamics.
We have a track record of novel research on AI policy. Our report “Safe Transformative AI via a Windfall Clause” shows how a Windfall Clause can de-escalate dangerous competition dynamics even under uncertainty and incomplete information. We also contributed to the Safety-Performance Tradeoff model, which demonstrates that technical progress in AI safety is not always sufficient to reduce risk absent effective AI governance. Our upcoming research assesses the consequences of revealing secret information about AI risks and how to make such early warnings more credible.
Open tools for other researchers.
We document and share our models so that other governance researchers can build on them. Our repositories are accessible to other researchers, our simulation runs are reproducible, and our tools have been used by PhD researchers studying AI competition.
The Survival and Flourishing Fund has committed a matched grant to Modeling Cooperation. Donations are currently matched 1:1 up to $26,000, with a minimum requirement of $10,000.
Full-year operating costs at our current scale come to approximately $90,000, covering one lead research engineer, one part-time developer, and one part-time operations lead. The matched $52,000 funds the team for seven months.
During this period, the team will maintain and extend the Intelligence Rising web application, which is used by the Technology Strategy Roleplay (TSR) to run and scale workshops for decision-makers on the risks of unfettered AI competition. The main focus will be improving the reliability, usability, and core game mechanics of the platform so that workshops can be run more easily and new facilitators can be trained more quickly.
This work will include:
Maintenance and support
Maintaining and hosting the Intelligence Rising web application.
Providing ongoing technical support and configuration adaptations in response to TSR facilitator requests.
Managing a shared, prioritised backlog of feature requests developed collaboratively with TSR.
Improvements and new features
Continuing to standardise and automate workshop facilitation through the software, reducing the operational burden per workshop.
Implementing the highest-value features from the backlog, including:
Dynamic team creation during the workshop game.
Additional dice mechanics for the dice roller.
Improvements to how policies affect the game state.
Facilitator onboarding improvements through the web tour and interface updates.
Together, these improvements will allow TSR to run more workshops with the same staff capacity, which is the main mechanism by which the project scales its impact.
See the SFF letter of intent here.
Modeling Cooperation is an independent group of software engineers and researchers who began collaborating in 2019 following participation in the AI Safety Camp. We are fiscally sponsored by Convergence Analysis. Over six years, we have received more than $400,000 in grants from the Survival and Flourishing Fund, the Foresight Institute, and the EAF Fund.
Jonas Emanuel Müller leads our work on AI competition dynamics and has over two decades of experience as a software engineer. He previously chaired the board of Animal Charity Evaluators and spent several years as a software engineer in finance.
Paolo Bova is a PhD candidate in computer science at Teesside University, where his thesis examines how early warning systems can de-escalate AI competition dynamics. He holds a bachelor’s in economics from Trinity College, Cambridge.
Tanja Rüegg manages funding reporting and communications, alongside a chief operating officer role at a data analytics firm. She has also supported other effectiveness-focused organizations (e.g. Sentience Politics, GBS Schweiz) as a volunteer, staff member, and board member.
Our research network includes a DPhil candidate in international relations at the AI Governance Initiative at the University of Oxford, and leading researchers including Professor Robert Trager and Dr. Shahar Avin (CSER, Cambridge & UK AISI).
If this campaign does not reach its goal, the matched SFF funding goes unrealised, and the team's capacity to continue this work beyond the near term is uncertain. The Intelligence Rising workshops would lose the development support for their primary software. Ongoing research into AI competition dynamics (a neglected area with few other quantitative researchers working on it) would be paused or wound down.
The field of AI governance lacks quantitative research for understanding strategic dynamics and software for making these dynamics visible to people who matter. Modeling Cooperation is one of very few teams working on quantitative research in AI competition and building the software tools needed to make the risks clear to decision-makers.