Democracies need strategic clarity on how to prevail in the age of advanced AI. They must navigate a matrix of threat actors and attack vectors while fostering AI innovation. Further complexity arises when interventions have downstream implications (e.g. international cooperation requires research on verification mechanisms) or create tensions (e.g. nationalizing frontier labs may secure democratic control but increase the risk of authoritarian slide).
For democracies to maintain their lead in AI development without compromising democratic values, they need clarity on what choices and trade-offs lie before them.
We present the AI Readiness Objectives (AROs) as a guide for governmental AGI preparedness. We are seeking $250k in funding in order to:
Map expert recommendations onto a single intuitive dashboard across interventions such as access controls, model security, international cooperation, and institutional resilience (see the sketch after this list).
Publish case studies clarifying preparedness gaps in the US & EU.
Create decision support tools that show how these interventions interact and depend on each other.
[Figure: illustrative mock-up by Claude]
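To make the dashboard idea concrete, here is a minimal sketch in Python of how expert recommendations might be mapped onto intervention categories. This is illustrative only: the data model, the example recommendations, and the expert labels are hypothetical placeholders, not the project's actual implementation or consultation data; only the intervention names come from the list above.

```python
from collections import defaultdict

# Intervention categories named in the proposal; the list is illustrative.
INTERVENTIONS = [
    "access controls",
    "model security",
    "international cooperation",
    "institutional resilience",
]

# Each expert recommendation is tagged with the interventions it addresses.
# The entries below are hypothetical placeholders, not real consultation data.
recommendations = [
    {"expert": "Expert A",
     "text": "Tier API access to frontier models by verified use case",
     "interventions": ["access controls"]},
    {"expert": "Expert B",
     "text": "Harden model weights against exfiltration",
     "interventions": ["model security", "access controls"]},
]

# Pivot recommendations into a per-intervention view -- the core of a
# single dashboard showing coverage across intervention categories.
dashboard = defaultdict(list)
for rec in recommendations:
    for intervention in rec["interventions"]:
        dashboard[intervention].append(f'{rec["expert"]}: {rec["text"]}')

for intervention in INTERVENTIONS:
    entries = dashboard.get(intervention, [])
    print(f"{intervention}: {len(entries)} recommendation(s)")
```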
Project deliverables have been requested by government representatives and other high-impact stakeholders, including US state staffers; ARI federal advocates building AI policy 'go bags'; DoD staff preparing a possible AGI-risk briefing for the House Armed Services Committee; and Dutch and Korean AISI officials working on goal prioritization. We have leveraged ~40 expert consultations to secure these engagements, with a second consultation round planned for April.
Our goal is to give the AI safety ecosystem a shared understanding of what AGI readiness requires, so actors can allocate scarce resources more strategically and avoid business-as-usual governance. We have already developed a prototype of 9 objectives through 30+ expert consultations across government, industry, civil society, and academia.
With this funding, we will:
Run a second expert validation round for the 9 AROs, including feedback from 20 additional experts and an in-person workshop;
Publish case studies assessing the EU AI Act and US AI Action Plan against the AROs, identifying which objectives are covered, which interventions are being used, and where critical gaps remain; and
Build and launch an online interdependency tool that maps synergies, conflicts, dependencies, and contingency scenarios across AI safety interventions.
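As a rough illustration of what the interdependency tool could encode, the following Python sketch models interventions as graph nodes with typed edges for synergies, conflicts, and dependencies, seeded with the two example relationships from the opening paragraph. It is a hypothetical data model under our own naming assumptions (InterventionGraph, Relation), not the tool's actual design; contingency scenarios could be layered on as conditional edges.

```python
from dataclasses import dataclass
from enum import Enum

class Relation(Enum):
    SYNERGY = "synergy"        # interventions that reinforce each other
    CONFLICT = "conflict"      # interventions in tension
    DEPENDENCY = "dependency"  # one intervention requires another

@dataclass
class Edge:
    source: str
    target: str
    relation: Relation
    note: str = ""

class InterventionGraph:
    """Hypothetical model of the interdependency map."""

    def __init__(self) -> None:
        self.edges: list[Edge] = []

    def add(self, source: str, target: str,
            relation: Relation, note: str = "") -> None:
        self.edges.append(Edge(source, target, relation, note))

    def related(self, intervention: str, relation: Relation) -> list[Edge]:
        """All edges of the given type leaving `intervention`."""
        return [e for e in self.edges
                if e.source == intervention and e.relation == relation]

# Seed the graph with the two relationships described in the proposal.
g = InterventionGraph()
g.add("international cooperation", "verification mechanisms research",
      Relation.DEPENDENCY, "cooperation requires verifiable commitments")
g.add("nationalizing frontier labs", "institutional resilience",
      Relation.CONFLICT, "may secure democratic control but risks authoritarian slide")

for edge in g.related("international cooperation", Relation.DEPENDENCY):
    print(f"{edge.source} -> {edge.target} ({edge.relation.value}): {edge.note}")
```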
We will judge success by whether the outputs are used: case studies cited in policy proposals, tools adopted by grantmakers and informing strategy in 3+ AISIs or equivalent agencies, and at least 80% of consulted experts endorsing the final objectives as complete and coherent.
This funding will be used to complete and launch the AROs project: running the second expert consultation round, hosting the workshop, producing the US and EU case studies with short summaries, and building the online interdependency tool. It will also extend implementation beyond the current baseline by strengthening expert consensus and accelerating international uptake. This is the work needed to finish the project and move it from the current proposal-and-prototype stage into public-facing tools and reports.
The AROs project is led by researcher Gwyn Glasser, supported by Dr. Elliot McKernon and David Kristofferson, and advised by Justin Bullock (ARI), Dewi Erwan (Bluedot Impact), and Alexander Saeri (MIT AI Risk Initiative).
Convergence Analysis has a strong track record of policy-relevant work. Our AI Model Registries report was cited by the Paris AI Action Summit, informed the US Bureau of Industry and Security, and had its recommendations incorporated into the EU’s GPAI Code of Practice. The organization also established the AI Scenarios Network, a coalition of roughly 60 researchers, and its lead researchers have developed endorsed harm-quantification frameworks for the CyberPeace Institute. Gwyn Glasser recently led policy advocacy on the EU GPAI Code of Practice, where his recommendations were adopted into the final code, and Elliot McKernon led foundational work including AI Model Registries: A Foundational Tool for AI Governance.
The most likely failure mode is straightforward: without funding, the project stalls before the visualization and interdependency tool are developed, and the broader package of validation, case studies, and public-facing decision tools is not completed. Our current runway for this project expires in May 2026, so failure would likely mean pausing the project indefinitely, before it can deliver its most actionable outputs.
If that happens, policymakers, AISIs, grantmakers, and researchers are less likely to get a shared framework for AGI readiness, and the field is more likely to continue with fragmented priorities and weaker coordination. That raises the risk of “business-as-usual” governance at a time when short timelines make strategic clarity especially important.
Over the last year, this project received seed funding from philanthropic donors and a Survival and Flourishing Fund speculation grant, together covering approximately one full-time researcher (1 FTE) for a year.