I’m writing a book on the history of artificial intelligence (AI) and catastrophic risk for a general audience. It is written in Swedish, my native language, but can also be read in Denmark and Norway, extending its reach across the Nordic region. The book builds largely on my experiences as a postdoc at UC Berkeley since 2022 (on a grant from the Swedish Research Council to study the history of AI from the perspective of errors and mistakes): attending conferences, participating in workshops and seminars, talking to interest groups, and so on.
The book is designed to be accessible to audiences with no prior knowledge of artificial intelligence, computer science, or statistics, and no familiarity with risk assessment or the philosophy of existential risk. I try to make the content come alive with anecdotes from my interactions with people in and around Silicon Valley, but the main ambition is to give a fair historical and societal account of these matters. My aim is to establish what is at stake in thinking about substantial threats and the role that artificial intelligence might play in these concerns. Having been affiliated with KTH Royal Institute of Technology in Stockholm since 2022, I can draw on the work of my colleagues there on the risks associated with nuclear technology and climate change. Relating these different forms of threat will help readers understand x-risk from AI.
Topics that this book will take on include:
- epistemological uncertainty
- histories of ignorance
- cultures of prediction
- the production of global risks in the 20th century
- the long history of automation
- the making of autonomy
- the concept of agents
- the birth of computer security
- persuasion as a business model
- deception as a strategy
- the distraction of ethics
- catastrophic and existential risk
- the progress of technology from the perspective of the accident
- the human factor
- the complexity argument
- the ongoing battle over risk definitions (short/long term; local/global; technical/social)
- the alignment problem as seen from explainable AI (XAI) and interpretability
- philosophical and literary takes on evolutionary AI
- governance, “pause letters” and regulation
- a critique of optimization
The book will be published in Swedish and will be accessible to a Nordic readership. The goal is to educate the general public on how AI presents new kinds of risks to society and mankind, and to encourage people to compare these risks with others that may be more familiar, such as climate change and nuclear war. In addition, the book seeks to tell a nuanced history of artificial intelligence, highlighting the philosophical and ethical aspects involved in trying to build smart machines and how these discourses have varied over time.
I'm already under contract with Fri Tanke to publish this book, but I need additional funds to free up the time to finish it. The funding will be used to pay for additional research, drafting, and writing time, as well as for completing the editorial process with the publisher.
This is a one-man effort. Because I have a background in Computer Science, I can follow the technical aspects of the AI space, which I combine with philosophical and theoretical perspectives from my doctorate in the History of Ideas. I have several years of experience designing and teaching courses on the history of technology and its critics, as well as on the history of radiation and nuclear power and weapons as forms of existential threat. I currently teach a course on the history of death and dying and another on the history of futurism and futures studies in the 20th century, both remotely for Stockholm University. I have published in both of these subject areas, and I take an active part in the Nordic debates on AI futures by publishing opinion pieces in the leading newspapers. (Incidentally, the first publication on nuclear winter appeared in a Swedish journal, and the country has fostered some of the most prominent voices in the recent x-risk space: Tegmark and Bostrom.)
While many in the Nordic countries read English, to truly reach a broad audience one must address people in their native language. If this project is not completed, this important audience will lack the knowledge needed to participate successfully in the ongoing discussion about AI from a risk perspective. Arguably, there is a democratic problem if only elites are able to grasp the consequences of the choices presently being made in the tech industry.
I will receive a small honorarium from the publisher for writing this book.
No other funding at this point.