I'm planning a project to write the history of artificial intelligence (AI) as an existential risk (x-risk). I estimate this as a 2–3 year full-time research project.
I seek $5K to buy one month of full-time work off from other projects/teaching to write a proposal to get the full project funded.
The notion that AI poses an existential risk, i.e. one that could cause the collapse of human civilization, is controversial and contested. Skeptics argue that the existential-threat discourse distracts from more immediate, tangible harms of AI, as well as from the climate crisis. Yet the project has no stake in determining whether AI x-risk is real or not. That the idea has made its way into the highest offices of global decision-making is enough to warrant it as a prioritized object of study, not least because we have been unable to properly account for its emergence: we lack knowledge of how this development took place over time.
The aim of this undertaking is, therefore, to chart the historical process, intellectually and culturally, by which existential risk became a dominant frame for thinking about artificial intelligence.
Tentatively, the full project is organized as three case studies. The first (1870–1970) is an in-depth study of speculative and fictional work about machines eliminating humans. The second (1989–2004) explores the expansion of futurist/transhumanist/posthumanist thinking in philosophical communities. The third case study (2005–2025) scrutinizes AI x-risk thinking as it matures into academic research programs and policy work in AI safety/alignment/global x-risk studies.
The project insists that we must understand the history of dreams and nightmares about our technological future if we wish to strengthen our influence over the direction society is currently taking.
This research project stands out by reconstructing the circumstances that made AI as x-risk thinkable in the first place, as opposed to promoting or debunking the very notion that thinking machines might be the end of humanity, which is typically the case with similar projects. The project argues that the belief in AI as an existential risk was neither an inevitable phenomenon nor, as critics of this lineage have claimed, a mere distraction from other kinds of risk. Instead, the project will study it as a historically contingent emergence that we need to understand better.
This project will generate a competitive proposal to secure funding for a 2–3 year research project to write the history of AI as x-risk.
The estimated one month of proposal writing builds on a course curriculum on AI as x-risk that I designed at Stanford in 2024–25, which was partially funded by Ryan Kidd at Manifund ($10K).
From this course design, and from collaborating with people at the Stanford Existential Risks Initiative (SERI), I have a good sense of the general history of AI as x-risk: key literature, essential sources, individuals to interview, etc.
Now, I need to put it all together in a competitive project proposal.
I am seeking $5K to buy one month of time off from other projects/teaching to write the project funding proposal. The proposal will be directed toward Coefficient Giving (OpenPhil), as well as the ERC Starting Grant (EU) and major Nordic research funding organizations (equivalents to the NSF, NEH, and Mellon).
This is a 1-month, one-man effort to design a project proposal to secure funding. The project, if funded, will be carried out in collaboration with Stockholm University and the Institute for Futures Studies in Stockholm. I have invitations to do on-site work at SERI (Stanford) and CSER (Cambridge).
I have taught and published on histories of computing, environmental modeling, nuclear geopolitics, and artificial intelligence for more than a decade. I am under contract with the Nordic publisher Fri Tanke to write a book on the history of AI. In the past four years, I have worked (mostly in the SF Bay Area) on a project about the importance of errors in the history of AI: from algorithmic breakdown, to backpropagation as error-correction, to the problem of the "human factor" vis-à-vis machine failure modes. I have presented papers at numerous conferences in the EU and US. I regularly give invited talks and sit on panels on the history of AI: its ideas, methods, and paradigms, in particular pertaining to questions of risk, existential and otherwise.
In 2022–24, I was a postdoc at UC Berkeley, and in 2024–25 a visiting scholar at Stanford. Prior to my Ph.D. in the History of Ideas, I earned a Master's in Computer Science.
If I cannot find the time to write this project proposal, I will not be able to secure funding for the larger project I am planning. If the one month of proposal writing is funded by Manifund but the proposal itself is not accepted, I will revise and resubmit and/or seek other partners to fund the 2–3 years of research required to produce a solid history of AI as x-risk.
I have a smaller grant ($1.2K) from a Swedish fund that has been useful in starting to organize materials and notes from the course curriculum I developed in 2024–25.