
Support the AI Policy Network for AGI preparedness lobbying

AI governance

Jeffrey Starr

Proposal (Grant)

Closes July 12th, 2025

$0 raised · $500 minimum funding · $100,000 funding goal


What are this project's goals? How will you achieve them?

The AI Policy Network (AIPN) advocates for federal policies that prepare America for the emergence of AI systems on the path to AGI and beyond. Much of our focus is on reducing the risk of humanity losing control to these systems. We pursue our goals by (1) educating federal lawmakers on issues such as frontier AI development, AGI, and extreme risks posed by these systems; and (2) lobbying for policy guardrails such as datacenter security policies and transparency requirements. Success looks like developing bipartisan coalitions in the US House and Senate to implement AGI preparedness legislation and policies.

What makes AIPN different from other similar efforts?

Compared to most other US lobbying efforts in the space, we believe two factors in particular set AIPN apart:

  1. We focus more on AGI and loss-of-control risks (as opposed to issues such as “current harms”), and we talk directly about these issues with policymakers.

  2. We have achieved substantial access to influential policymakers (e.g., over the past year, we’ve held over two dozen private, 1-to-3-hour events – typically dinners – with key members of Congress to discuss our issues). Notably, our engagements have been well received on both sides of the aisle; around half of our engagements have been with each of the major parties, and we’ve found various members of each party to be receptive to our point of view (though obviously the level of reception has varied between individual members within each party).

We believe the above two factors are particularly important in tandem – we are able to have relatively frank discussions about AGI risk with important policymakers who are happy to engage with us. As one anecdote, our President of Government Affairs, Mark Beall, recently briefed the House Democratic Caucus about artificial superintelligence and related extreme risks, and his tweet about the event was retweeted by Representative Ted Lieu, Vice Chair of the House Democratic Caucus.

How will this funding be used?

We’re a 501(c)(4) seeking funding to cover lobbying activities. Donations will be used to expand our capacity to educate lawmakers and help fill AIPN’s funding gap for 2025. Roughly 65% covers core staff; ~20% sustains our outside lobbying firms; and ~15% comprises legal & compliance, travel, finance, operational infrastructure, and minimal DC and Bay Area office space. We’re aiming to raise $100k through Manifund. These funds are needed to help sustain our current staff and operations through the end of calendar year 2025 (in total, we need to raise at least $400k in 501(c)(4) funding for the remainder of the year, though we are also applying for funding from other sources).

Who is on your team?

Daniel Colson (Executive Director)

Daniel leads AIPN. He separately is the founder and executive director of the AI Policy Institute (AIPI), a 501(c)(3) that (in addition to providing support for AIPN’s efforts) operates as a survey research firm focused on AI. AIPI's polling of American attitudes towards AI has been instrumental in getting policymakers to take AI risk more seriously. 

Mark Beall, Jr. (President of Government Affairs)

Mark directs our engagements with Congress and the Trump Administration. He previously served as the inaugural Pentagon AI Policy Director at the DOD Joint AI Center. Additionally, he has previously advised the American Security Fund, co-founded Gladstone AI, co-authored an influential report on frontier AI risks for the U.S. Department of State, and led AWS’ cloud program for the defense industry. 

Daniel Eth (Senior Research Fellow & Director of Content)

Daniel oversees production of our educational materials. Outside of AIPN, he researches the possibility of automated AI R&D leading to an intelligence explosion, and he recently co-authored a report on the subject. He previously performed AI governance research at Oxford. He additionally holds a PhD from UCLA, as well as a BS degree and an MS degree from Stanford, all in Materials Science & Engineering.

Jeff Starr (Chief Operating Officer & Director of Development)

Jeff leads our operations and fundraising. Previously, he co-founded Growth Accelerators, a tech go-to-market consultancy, and he founded, led, and sold a software company. He has over 20 years of experience across 01Click, SAP, i2, and McKinsey.

Thomas Larsen (Head of Policy)

Thomas guides AIPN's policy and legislative strategy. He’s also a researcher at the AI Futures Project, where he co-authored AI 2027. Formerly, he co-founded the Center for AI Policy and conducted AI safety research at MIRI and the Stanford Existential Risks Initiative.

We have additionally retained outside firms to help in the following areas:

  • Vogel Group – for political and legislative strategy, facilitating connections with policymakers in Congress, and co-managing our lobbying engagements.

  • PEM Law – for legal and compliance processes, organization formation, labor law, and contracts.

  • GRF CPA & Advisors – for financial systems, processes, and IRS filings.

What are the most likely causes and outcomes if this project fails?

Causes: failure to close our funding gap, failure to effectively compete with corporate lobbying efforts, or rapid partisan polarization that freezes our bipartisan strategy.

Outcomes: If we fail to raise the necessary funds, we will need to substantially curtail our activities, which will decrease our ability to educate lawmakers on our issues. We are one of the only groups that both talks directly about AGI risks and has received a warm reception from various senior U.S. policymakers across party lines, so curtailing our activities would likely leave many key policymakers unexposed to a serious case for AGI risk and for the policies we think would help address it.

Further, curtailing our activities would force us to cede ground to corporate lobbyists and would increase the instances in which industry narratives go uncontested. Legislation establishing guardrails such as transparency measures for frontier model development would be less likely to pass, and less likely to be targeted at AGI-related challenges if it did. If the USG fails to enact guardrails as we approach AGI and superintelligence, it becomes more likely that humanity will lose control of the future to these advanced AI systems.

How much money have you raised in the last 12 months, and from where?

Organizational 501(c)(4) revenue: $629k already received plus $125k pledged, all from individual donors. Our total 501(c)(4) funding need for 2025 is $1.17M ($412k is still needed to close the gap).
