IMPETUS
AI without limits is a ticking time bomb that PauseAI is working to neutralize. The task should not rest upon our shoulders alone. We need to bring our politicians into the fold, to alert them to the existential threats that superintelligent AI poses, and then to advocate for effective and timely risk-mitigation laws.
BASIC IDEA
PauseAI proposes to give Gov’t Action Kits to as many politicians as funding will allow, and then to add thought leadership, resources, and educational outreach into the mix.
We’ll begin with the US Federal Government, giving one or more kits to every Representative, Senator, and Governor, plus the President, Vice President, and Cabinet Secretaries; some 600 recipients in all.
In a related effort, we’ll give 200 kits to major news outlets, along with a media kit containing specialized content of interest to journalists.
As a stretch goal (should funding allow), we’ll expand the recipient list to include state government officials plus staffers (at both the state and federal level), and anyone else in government who asks.
To be perfectly clear, there should be no expectation of this effort directly leading to legislative or administrative action around our core claim: that superintelligent AI is an existential threat that our politicians must work to mitigate. Our more modest goal is to inform, educate, and support our leaders, and to give them the tools, data, thought leadership, and social proof they’ll need to make headway on safeguarding us all.
It is likewise important to note that while this effort is focused on the US (even though the larger PauseAI effort is international), the decision to narrow our scope to the US alone is entirely practical. The companies putting mankind at risk are largely US-based—OpenAI, Anthropic, Google, Microsoft, NVIDIA, etc.—so it follows that US efforts will be needed if we are to effect any change.
In a more general sense, by limiting our fundraising goals to $50K rather than $100K or even $1MM, we increase the likelihood of getting funded in a timely manner, thereby improving our ability to act expeditiously.
PAUSEAI
PauseAI is the world’s foremost AI-safety community, dedicated to the proposition that superintelligent AI poses an existential risk to humanity, and that the only sensible response to such a situation is to pause the development of frontier-grade AI. See https://pauseai.info/proposal for a set of concrete steps that humanity might undertake to make us all safer in the face of AI-borne risks (https://pauseai.info/risks).
LEADERSHIP
Louis Berman (Project Lead): Louis left his Chief Technologist role at EPAM (an industry-leading Pennsylvania-based 50K+ employee programming consultancy) to have more fun slinging code at SquidEyes (a currency-trading startup he co-founded in 2022). As a PauseAI volunteer, Louis is working to mitigate the risk of superintelligent AI. An avid astronomer, Mr. Berman led the very first visual observation of Eris—the dwarf planet that got Pluto demoted. A happily expatriated New Yorker, Mr. Berman lives on the outskirts of Philadelphia with his lovely and talented wife, set and costume designer Marie Anne Chiment.
Joep Meindertsma (Exec. Oversight): Joep is the CEO of Ontola.io, a software development firm from the Netherlands that aims to give people more control over their data. Joep founded PauseAI and actively lobbies for slowing down AI development.
Holly Elmore (Exec. Sponsor): Holly has a Ph.D. in Organismic & Evolutionary Biology from Harvard University, where she also led the Effective Altruism student group. She subsequently worked as a researcher at a think tank on the question of whether humans could improve the lives of wild animals before shifting her cause area to found PauseAI US, where she serves as Executive Director. Along with PauseAI Global, she is working to bring about an indefinite, global pause on frontier AI development — until we can be confident that the technology is safe for humanity. Some of her writing, and a Future of Life Institute podcast episode featuring her, are available online.
John Sherman (Content Development): John is an entrepreneur and former journalist living in Baltimore. As an investigative local TV news reporter from 1999 to 2010, John won every major national journalism award, including a Peabody Award, a duPont-Columbia Award (the electronic media version of the Pulitzer Prize), a National Emmy Award, and a National Edward R. Murrow Award, among others. In 2010 John left news to start Storyfarm, a now 15-year-old Addy- and Emmy-winning creative video agency whose clients include Under Armour, SAP, Uber, Match Group, T. Rowe Price, Starbucks, Mastercard, and many more. In March 2023, John read Eliezer Yudkowsky's AI safety article in Time Magazine online, and his life was forever changed. The son of a nuclear arms-control negotiator, John grew up believing our most serious problems are solvable. So he put his communications skills to use, launching For Humanity: An AI Safety Podcast in November 2023.
Felix De Simone (Organizing Director): Felix got his start in environmentalism, organizing around the risks of climate change and ecosystem loss. He has led grassroots campaigns in Michigan, Massachusetts, and California on causes from ocean conservation to pollinator protection. His work has seen him organize groups of 40+ volunteers, plan and coordinate events, build coalitions of support from academics, community groups, and local officials, and meet with Congressional offices on crucial policies. His advocacy efforts have contributed to the passage of aggressive clean-energy legislation in Massachusetts and restrictions on bee-killing pesticides in California. Felix has since plunged into the world of AI safety, having joined PauseAI after recognizing the possibility of a cataclysmic AI outcome. Felix is also an aspiring science-fiction author and is working on his first novel, depicting a flourishing human future in a world where AGI has been paused.
“THE KIT”
Introductory “Call-To-Action” letter
Paperback copy of Darren McKee’s “Uncontrollable” book
Legislative Ideas booklet
Oversized “Quick Facts” postcard:
Gov’t Action Elevator Pitch (25 words or less)
URL and QR-code to a landing page on the PauseAI site, with:
An introductory PauseAI video
Gov’t Action Kit video “selling” our core message
AI-Safety Learning Path (wiki)
Links, with brief write-ups, to allied organizations such as CAIS
Top 10 list of AI-safety facts
Top 3 quotes from the PauseAI site
Contact info for our Gov’t Action Support Team
Swag
An overriding theme of the Gov’t Action Kit is personalization, beginning with the fact that each kit will be mailed to its recipient by a volunteer in their constituency. The letter and sticky-note must be personalized and hand-signed, too.
HUMAN
The kit will deliver value on its own, but if our efforts are to meet with true and lasting success, we’ll also need to add the human touch. That means one-on-one legislative support, public speaking (to politicians, staffers, media, etc.), grassroots education, up-to-date data and statistics, thought leadership, and plain ol’ customer service. If we are successful:
ALL of the recipients will have received their kits by July 15th or earlier. (This political season is bound to be a bruiser; if we wait until the post-Labor Day session, our message will get lost in the chaff.)
The team will have spoken to staff at each recipient’s office.
Media coverage will have gotten the PauseAI story out to 10M+ viewers.
Stretch Goal: meetings with 10% of recipients and/or staffers
It will take the effort of dozens to bring this to fruition.
Again, the kit should not be looked at as a tool to prompt specific action, but rather as a conversation starter; nothing more or less. We need to talk to our politicians, to help them understand the seriousness of the situation, to offer thought leadership, and ultimately, to help them understand the consequences of their inaction.
[NOT!] LOBBYING
The Gov’t Action Kit is an educational effort. We very much hope that our efforts will inspire positive action, but we must be careful to not step over the line into actual lobbying. The entire purpose of this effort is to lay the groundwork for future concrete efforts—by PauseAI and others—so we will be careful not to muddy the waters.
FUNDING
We’ll need a modest $40,000 to create, package, mail out, and support the first tranche of 800 kits (above and beyond the $10,000 pledged by Project Lead Louis Berman). The money will be raised through a GoFundMe campaign, grants from providers like Nonlinear Network, and internal fundraising among our membership.
Under normal circumstances, it would make sense to attempt to raise a larger sum. However, given the nature of the beast—US lobbying spending hit $4.2 billion in 2023 alone, a so-called “off” year—that is not a winning strategy. Our one chance of success will be to leverage our grassroots: not phone banks, nor slick brochures, nor payola. “We, the people” are the asset, and in this circumstance, that will have to be enough.
Whatever we raise, excess funds will not go into a general bucket. The plan is to apply every single penny to kit creation and distribution, plus directly related costs such as legal, website, and content development.
As a PauseAI US project, we plan to conform to US law.
One last thing: irrespective of how much money we ultimately raise, an essential part of this effort will involve social proof. Politicians know from money, but what really impresses them is people — especially angry, pissed off, motivated people. It would be far better to raise a single dollar from each and every one of the 1000+ PauseAI members, for instance, than to have a few big donors.
TIMING
In an ideal world, we would raise the money in two weeks and send out the kits the following week, but the truth of the matter is that a project of this magnitude will take time. As such, it seems prudent to ship the kits at the beginning of July (let’s say July 4th, for the patriotic resonance).
Assuming we assemble a core team before April Fool’s Day (and yes, the irony of that date is more than a little apt!), a possible schedule might look like the following:
April: teambuilding, Manifund and other crowdfunding prep, media development, document creation, outreach, research, legal, plan refinement, website and other technical, etc.
May: fundraising, media-relations, team prep, web dev, and media dev
June: purchase materials, assemble and then package the kits
(Early) July: mail and/or hand-deliver kits to recipients
Ongoing: outreach and engagement plus stretch goal(s), if funded
TRACTION
In all truth, PauseAI hasn’t had all that much success with grassroots lobbying. It’s true that PauseAI volunteers (project leaders included) have met with dozens of politicians in the US and abroad, but results have been mixed. In particular, we have yet to see laws or executive action enacted in alignment with our proposal.
More than anything, this lack of success stems from the basic fact that PauseAI’s key message is a tough, tough sell. People do not want to hear that industry’s go-fast-and-break-things ethos is putting every human life at risk. We understand that it is a difficult story, but given the consequences of failure, it is essential that we remain persistent.
Execution-wise, there is more to be hopeful about. Louis has founded and led companies, helped raise venture capital ($150MM in all, in nine different rounds across two firms), and led many successful projects at all scales. Joep, Holly and John are three of the best-known and most-respected leaders in the AI safety community, and the PauseAI membership is second to none in AI-safety thought leadership, outreach, and action.
CREDO
Margaret Mead said it best: “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”
++BENEFIT
One of the paramount challenges facing volunteer organizations is keeping their membership energized. That means setting achievable objectives for dedicated core teams while strengthening the organization as a whole. The leadership team believes that taking on a project of this scope and scale will benefit the entire PauseAI membership, building capacity for future endeavors and enlarging the organization’s impact and effectiveness in the long run.
DISCLAIMER
There is a substantial possibility that our endeavor will falter or fall short of expectations, for reasons spanning leadership, goals, breadth, relevance, and execution. These include:
Even with flawless execution on our part, the response from the intended audience might be lackluster or indifferent.
We might encounter significant obstacles in achieving our funding objectives, potentially derailing our project before it gains momentum.
Our estimates regarding costs and timelines could be significantly inaccurate, posing serious challenges to the project's feasibility and sustainability.
The way we communicate our message might be flawed, misleading, or even have an adverse effect, diminishing the impact of our efforts.
Predictions and warnings from the AI safety community might not resonate as strongly with policymakers when compared to the persuasive arguments of industry representatives, diminishing our influence.
Our attempt to establish thought leadership in this domain might not inspire confidence or garner the respect we anticipate.
A lack of effective leadership could significantly hinder our progress, leading to disorganization and a loss of focus.
The team might lack the requisite skills, commitment, or cohesion necessary to achieve our objectives.
Our efforts could be severely compromised by industry lobbyists, who, armed with substantial resources, may sway opinion against our initiatives.
The development of AI technologies might pose existential risks that could overshadow or even preclude the realization of our project's goals.
Navigating the complexities of influencing governmental policies and practices might prove to be an insurmountable challenge, limiting our ability to affect meaningful change.
Finally, in an era captivated by technological novelties and advancements, our reliance on science and rational argumentation might struggle to compete against more superficial or sensational appeals.
Notwithstanding these potential challenges, the team remains both prepared and eager to tackle the task at hand. We are steadfast in our belief that the Gov’t Action Kit is an ideal initiative for PauseAI: it aligns squarely with our organization’s fundamental mission and offers our volunteer membership an excellent opportunity to contribute meaningfully. By leveraging our collective expertise, passion, and commitment to the cause, we are confident in our ability to navigate the complexities of this endeavor and make a significant impact on AI safety and governance.