CAIP is seeking funding to renew our 2025 advocacy budget so that we can continue our efforts to persuade Congress to pass AI safety legislation.
The Center for AI Policy (CAIP) is a growing 501(c)(4) non-profit organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. CAIP’s primary goal is to persuade Congress to pass AI safety legislation. We focus on advocating for U.S. federal regulatory action because we believe this is the only reliable way to limit American developers’ ability to train and deploy dangerously advanced AI and to adequately protect the public from catastrophic risk.
Our advocacy efforts aim to influence the content of key AI safety bills and enhance the likelihood that those bills will be passed in Congress. CAIP develops the resources and provides the advice that Congress needs to act on AI safety; we have an expert team to help convince them to do so. Our resources include concrete policy proposals, recommendations, and draft model legislation ready for Congress to use. Through one-on-one meetings, panel briefings, and happy hours, we socialize those policies and provide expertise to Congressional offices on a regular basis. We’re also actively building grassroots bases across the country to increase our influence on AI safety legislation from the bottom up.
To achieve our advocacy goals, we plan to do the following in 2025:
Meet regularly with Congressional offices to help them understand the potentially catastrophic risks of AI, socialize policy ideas, and advise them on how to improve drafts of AI safety legislation from a public interest perspective.
Support Congressional offices by offering feedback on legislative proposals and endorsing bills focused on AI safety.
Meet and collaborate with other advocacy groups, think tanks, AI researchers, industry, and other interest groups to build consensus on, and collectively push for, the best AI safety policies.
Mobilize grassroots support through campaigns (rallies, letter writing, phone banking) and small events with stakeholders in key districts across the country.
We understand that it may take Congress a long time to enact AI safety legislation, but we don’t know when the next AI crisis is coming. It could be right around the corner, and when it comes, policymakers will need to act quickly. The more that we can do now to prepare Congress, the better equipped they will be to pass responsible AI legislation. CAIP exists to fill this need. We are helping Congress understand the problem of catastrophic AI risks and providing trusted advice and concrete policies to ensure safe AI.
If you would like more information about CAIP’s work and our funding needs, please contact development@aipolicy.us.
This funding will be used to cover the salaries and benefits of our policy and advocacy experts who are lobbying Congressional offices and building our grassroots bases and coalition partners across the United States, as well as various program costs. Salaries make up most of our direct costs because our in-house experts are the primary implementers of our advocacy activities.
Over the last year, CAIP hired an expert team that is vital to our ability to develop accurate policy recommendations, effectively influence policymakers, and leverage coalitions and grassroots networks. Our advocacy team consists of our Executive Director Jason Green-Lowe, Government Affairs Director Kate Forscey, Government Relations Director Brian Waldrip, External Affairs Director Mark Reddish, and National Advocacy Coordinator Iván Torres.
Our advocacy track record as a new AI safety organization is strong. CAIP’s biggest accomplishment so far has been drafting and publishing model AI safety legislation that addresses the need for data center monitoring, third-party safety evaluations, and liability reform. A number of Congressional offices have invited us to discuss AI safety bills and suggest edits, and we have successfully added line edits to AI bills and persuaded offices to improve their drafts. CAIP also endorsed specific AI legislation and Congressional candidates in the 2024 elections and organized two open letters with 16 signatories urging policymakers to pass responsible AI legislation.
One potential cause of project failure is a lack of funding. We have secured some funds that could get us through the first quarter of 2025, but we need more support to keep our advocacy work going. There is also a small risk of political gridlock. Our core team has delivered hundreds of policy papers, recommendations, and events that have built CAIP’s reputation as a leading source of AI safety and governance expertise. As a result, we are well-positioned to take advantage of any movement Congress makes on AI safety. Unified Republican control of the federal government makes this movement more likely, but we cannot guarantee that Congress will act next year.
Our primary funders are mid-level and major donors who are concerned about AI risks. We also recently received funding from the Survival and Flourishing Fund. With a new development director in place, we are currently working on a comprehensive development strategy to increase and diversify our funding sources over the next year.