Project summary
This year, researchers (e.g. Katja Grace) started exploring the possibility of slowing AI scaling.
Communities on the frontline are already working to restrict growing harms:
1. Digital freelancers whose copyrighted data (art, photos, writing) are copied to train AI models that then compete in their own market.
2. Product safety engineers/auditors who find that comprehensive design and safety tests for intended uses are being neglected.
3. Environmentalists tracking a rise in toxic emissions from hardware compute.
I’m connecting leaders and legal experts from each community.
We’re identifying cases to restrict AI data piracy, misuses, and compute. Court injunctions are a time-tested method for restricting harmful corporate activity, and do not require new laws or international cooperation.
At our Brussels meeting on data piracy, a longtermist org’s head of litigation decided they will probably hire two experts to evaluate European legal routes.
Meanwhile, I am raising funds to cover a starter budget.
Many considerations are below, but I’ve also intentionally left out some details.
If you have specific questions, feel free to schedule a call: calendly.com/remmelt/30min/
Project goals
Let me zoom out before zooming in:
HIGH-LEVEL CRUXES
Most AI Safety researchers who have studied the AGI control problem for a decade or longer seem to have come to a similar conclusion, derived through various paths of reasoning:
Solving the problem comprehensively enough for effectively unbounded-optimizing machinery would, at our current pace, take many decades at a minimum. E.g. see Soares’ elegant argument on serial alignment efforts: lesswrong.com/posts/vQNJrJqebXEWjJfnz/a-note-about-differential-technological-development
Researchers like Yampolskiy have also investigated various fundamental limits to AGI controllability. Landry and I even argue that the extent of available control is insufficient to prevent long-term convergence on extinction: lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
For this grant’s purpose, it does not much matter whether the AGI control problem is merely very hard or outright unsolvable.
What matters:
1. AI Safety researchers who have thought deeply and worked extensively on the control problem for a decade or longer have concluded that relatively little to no progress has been made.
2. Over that same decade, AI companies made far more “progress” on scaling neural-network-based architectures, which are now economically efficient at emulating many human cognitive tasks and are starting to be used to replace human workers.
Argument:
3. If we could slow or restrict corporate AI scaling, this would give AI Safety researchers the time to catch up on developing safe control methods, or to find out that control is fundamentally intractable.
4. Counterargument:
But can we slow AI-scaling, really? Even if we could, is there any approach that does not get us embroiled in irrational political conflicts?
~ ~ ~
POSSIBLE APPROACHES
Approaches that risk causing unforeseen side-effects and political conflicts:
- to publish books or media articles about ‘powerful godlike AI’.
- to lobby for regulations that take the continued development of ‘general-purpose AI’ as a given.
- to fund or conduct marginal safety research inside AGI R&D labs or government AI taskforces, which use safety results as a marketing and recruitment tool.
Specifically, risks around:
1. Epistemics:
Misrepresenting the problem in oversimplified ways that encourage further AI development and obfuscate the corporate incentives and technical complexity involved.
2. Allowances:
Opening up paths for corporations to be given a free pass (e.g. through safety-washing or appeals to national interest) to continue scaling AI models for influence and profit.
3. Conflicts:
Initiating a messaging ‘tug of war’ with other communities advocating to prevent growing harms (e.g. artists and writers, marginalized communities targeted by uses of the tech, and AI ethics researchers acting on their behalf).
Contrast with another approach:
Sue AI companies in court.
1. Legal cases force each party to focus on evidence of concrete damages and on imposing costs and restrictions on the party causing the damage. They also get at the economic crux of the matter: without enforced costs and injunctions against harms done, AI companies will compete to scale models that can be used for an ever wider variety of profitable purposes.
2. Unlike future risks, harms are concrete and legally targetable (twitter.com/RemmeltE/status/1666513433263480852). Harms are also easier to cut off at their source, which blocks off paths to extinction too. Multi-stakeholder governance efforts aimed at preventing ambiguously defined future risks leave loopholes for AI corporations to keep scaling ahead anyway.
3. Offering legal funding for injunctions against harms would inspire collaborations between the AI Safety community and other cash-strapped communities, some of whom have started to get cross with us.
This no-nonsense response to the increasing harms would even heal divides within AI Safety itself, where some (like me) are very concerned about how much support the community has offered to AGI R&D labs in exchange for surmised marginal safety improvements (forum.effectivealtruism.org/posts/XZDSBSpr897eR6cBW/what-did-ai-safety-s-specific-funding-of-agi-r-and-d-labs).
David Krueger aptly describes the mindset we are concerned about:
❝ ‘These guys are building stuff that might destroy the world. But we have to…work with them to try and mitigate things a little bit.’
❝ As opposed to just saying: ‘That’s wrong. That’s bad. Nobody should be doing it. I’m not going to do it. I’m not going to be complicit.’
(twitter.com/DavidSKrueger/status/1669999795677831169)
The AI Safety community has been in “good cop” mode with AGI R&D labs for a decade, while watching those labs develop and scale training of AlphaZero, GPT, Claude, and now Gemini. In the process, we lost much of our leverage to hold labs accountable for dangerous unilateralist actions (Altman, Amodei, or Hassabis can ignore our threat models at little social cost and claim they have alignment researchers to take care of the “real” risks).
I won’t argue for adding a “bad cop” for balance.
Few individuals in AI Safety have the mindset and negotiation skills to constructively put pressure on AGI R&D labs, and many want to maintain collaborative ties with labs like Anthropic.
Funding legal cases, though, seems a reasonable course of action: both from the perspective of researchers aiming to restrict pathways to extinction, and from the perspective of communities already harmed by unchecked AI data scraping, model misuses, and environmentally toxic compute.
~ ~ ~
FIRST STEPS
The goal is to restrict the three dimensions along which AI companies consolidate power and do harm:
1. data piracy.
2. misuses of models for profit and geopolitical influence.
3. compute that is toxic to the environment.
See also this report: ainowinstitute.org/general/2023-landscape-executive-summary#:~:text=three%20key%20dimensions%3A
Rough order of focus:
1. Data piracy is the first focus, since there are now, by my count, 25+ organizations acting on behalf of communities harmed by AI companies’ extraction and algorithmic misuse of their data. Copyright, privacy, and work contract violations are fairly straightforward to establish, particularly in the EU.
2. Lawsuits against AI engineering malpractice and negligent uses of AI models come next. Liability is trickier to establish here (given ToS agreements with API developers, and so on) and will need advice and advocacy from product safety experts across various industries (a medical device safety engineer and I are starting those conversations).
3. Finally, while climate change litigation has been on the rise (lse.ac.uk/granthaminstitute/publication/global-trends-in-climate-change-litigation-2022/), I expect it will take years for any litigating organization to zero in on the rising CO₂ emissions and other eco-toxic effects caused by corporate reinvestments in mines, chip fabs, and server farms.
Over the last six months, I, along with diverse collaborators, have built connections with organizations acting to restrict AI data piracy.
I talked with leaders of the Concept Art Association, the European Guild for AI Regulation, the National Association of Voice Actors, The Authors Guild, and the Distributed AI Research Institute, among others. We are also connecting with experienced lawyers through several organizations.
Last month, I co-organized the Legal Actions Day in Brussels (hosted by the International Center for Future Generations).
Two results from our meetings:
1. Legal prioritization research:
The head of litigation at a longtermist organization decided they will probably hire two legal experts for a month to research and weigh different legal routes for restricting data scraping in the EU. This depends on whether the organization receives funding for that: an evaluator working for a high-net-worth donor is now making the case to the donor.
2. Pre-litigation research for an EU copyright case:
Two class-action lawsuits have been filed against OpenAI for copyright infringement over its scraping of creatives’ works (buttericklaw.com). Surprisingly, no copyright lawsuits representing creatives have yet been filed in Europe against major AI companies (a complicating factor: class-action lawsuits have only just been introduced there, and only for consumers). We are working with the Good Lobby to arrange pro-bono lawyers for initial legal advice, but eventually we will need to pay attorneys.
Up to now, I have been coordinating this in my spare time, but my funding at AI Safety Camp just ran out. It looks likely that I can arrange funding from one or more high-net-worth donors in 3+ months’ time, but only after the legal prioritization research has been done.
I’m therefore asking for a grant to cover the gap, so I can dedicate time to carefully doing the initial coordination and bridge-building required to set ourselves up for effective legal cases.
How will this funding be used?
First, to coordinate this project:
$40,000 to cover 6 months of pay for Remmelt (alternatively, to fund one of the options below).
Then, if more funding is available, pay for one of these:
$40,000 to pay Tony Trupia for ongoing campaigning as a pro se litigant against OpenAI.
$45,000 for a pre-litigation research budget for a class-action lawsuit by data workers against OpenAI that is currently being prepared by a law group.
$45,000 for a pre-litigation research budget for a European copyright case.
$48,000 for two legal experts (4 weeks at ~$150/h) to do Europe-focused legal prioritization research at a longtermist org.
Note:
- I also submitted an application to Lightspeed, covering my pay only.
- I expanded our starter budget here, but kept it humble given that most regrantors seem to focus on technical solutions.
- The more funding, the faster we can facilitate new lawsuits to restrict data scraping and then model misuses. This is a new funding niche that can absorb millions of dollars (I have various funding opportunities lined up).
- Once we raise funds beyond $85K, I intend to set up a fiscally sponsored account through Player's Philanthropy Fund to hold uncommitted legal funds. We will then also recruit a panel of legal experts to advise on legal routes, taking inspiration from digitalfreedomfund.org.
What is your (team's) track record on similar projects?
Please see the end of 'Project goals' for the initial coordination work I carried out.
I am an experienced fieldbuilder, having co-founded and run programs for AI Safety Camp and EA Netherlands.
How could this project be actively harmful?
If we prepare poorly for a decisive legal case, the judge may dismiss the arguments. This in turn could set a bad precedent (for how judges apply the law to future cases) and/or hinder a class of plaintiffs from suing the same AI company again.
Also, details could leak to the AI companies we intend to sue, allowing them to prepare a legal defense (which, by the way, is why I'm sparing with details here).
What other funding is this person or project getting?
None at the moment, except compensation for my hours spent coordinating legal action workshops.