Thank you for sharing your concerns.
How is suing AI companies in court less likely to cause conflict than the 'good cop' approach you deride?
Suing companies is business as usual. Rather than focussing on ideological differences, it focusses on concrete harms done and why those harms are against the law.
Note that I was talking about conflicts between the AI Safety community and communities like AI ethics, and those being harmed whom AI ethics researchers are advocating for (artists and writers, data workers, marginalised tech-exploited ethnic communities, etc.).
Some amount of conflict with AGI lab folks is inevitable. Our community’s attempts to collaborate with the labs, to research the fundamental control problems first and to carefully guide AI development to prevent an arms race, did not work out. And not for lack of effort on our side! Frankly, their reckless behaviour now, reconfiguring the world on behalf of the rest of society, needs to be called out.
Are you claiming that your mindset and negotiation skills are more constructive?
As I mentioned, I’m not arguing here for introducing a bad cop. I’m arguing for starting lawsuits to get injunctions against widespread harms done (data piracy, model misuses, toxic compute).
What leverage did we have to start with?
The power imbalance was less lopsided. When the AGI companies were in their start-up phase, they relied far more on our support (funding, recruitment, intellectual backing) than they do now.
For example, public intellectuals like Nick Bostrom had more ability to influence narratives then than they do now. Since then, AGI labs have ratcheted up their own marketing and lobbying, crowding out the debate.
A few examples for illustration, but again, others can browse your Twitter:
Could you clarify why those examples are insulting to you?
I am pointing out flaws in how the AI Safety community has acted in aggregate, such as offering increasing funding to DeepMind, OpenAI and then Anthropic. I guess that’s uncomfortable to see in public now, and I’d have preferred that AI Safety researchers had taken this seriously when I expressed concerns in private years ago.
Similarly, I critiqued Hinton for letting his employer Google spend years scaling increasingly harmful models based on his own designs, and for still offering little useful response, despite his influential position, on how to prevent these developments in his public speaking tours. Scientists in tech have great power to impact the world, and therefore great responsibility to advocate for norms and regulation of their technologies.
Your selected quotes express my views well. I feel you selected them with care (i.e. no strawmanning, which I appreciate!).
I think there's some small chance you could convince me that something in this ballpark is a promising avenue for action. But even then, I'd much rather fund you to do something like lead a protest march than to "carefully do the initial coordination and bridge-building required to set ourselves up for effective legal cases."
Thank you for the consideration!