I’m developing a new university course on existential risk. The aim is to situate what we might call the 20th-century risk landscape within a longer tradition of thinking about collective harm and societal exposure to threats. The focus is on x-risk from artificial intelligence, but the course places this scenario in a broader societal and historical context. I expect the course to draw a substantial crowd, since the topic is receiving more and more attention. The goal is to start educating people on this defining issue of our time, who can then continue the conversation with friends and groups within and outside the university.
Having completed the course, students should be equipped to reason about and compare risks of environmental, nuclear, and AI origin, and to understand that while humanity has always been technical and has used technology for good and for ill, risks and threats have, since the mid-20th century, become distinctly more collective and planetary in nature. AI x-risk adds some new matters of concern to this landscape.
The course draws on my 2+ years of experience as a visiting scholar at UC Berkeley and (for the 2024-25 academic year) at Stanford, including engagement with scholarly and activist perspectives on x-risk in the SF Bay Area.
The course would initially be given at Stockholm University (where I earned my Ph.D. and still teach) and at the KTH Royal Institute of Technology (where I hold a postdoctoral position). I have a background in computer science and a Ph.D. in the History of Ideas.
The course draws on two other courses that I’ve been teaching for some time – “A history of technological critique” and “A history of death and dying”. The first covers the past two centuries of thinking about the costs and benefits of technology from a societal standpoint; the second covers a range of attitudes toward human finitude since ancient Greece. In teaching these, I have emphasized the uniqueness of nuclear threats and the strategies devised to deal with them, from MAD policies to radiation-minimization tactics for citizens, because I think the nuclear threat of the Cold War changed the concept of risk dramatically.
In this new course, I’m keen to give students a sense of how, in the mid-20th century, risks and threats expanded significantly in both space and time as the nuclear threat and the environmental crisis entered policy mindsets and the public sphere. Interestingly, computers were critical both to making nuclear science a reality and to assessing environmental problems; only much later did this technology emerge as a risk factor in its own right. This tension will be the focus of the course syllabus and teaching.
Throughout the latter half of the 20th century and up to the present day, there has been a complex tension between technology, risk, and prosperity. I want this course to help students think critically about this dynamic and understand the different philosophical lines of thinking that have developed in risk studies, broadly conceived, since the 1970s.
Incidentally, Sweden has played an interesting part in these histories: Arrhenius was the first to scientifically investigate climate change in the late 19th century, and Sweden hosted the first and influential UN environmental summit in Stockholm in 1972, while at the same time maintaining a welfare state that was then the largest procurer of computer equipment in Northern Europe. The country has also raised two of the most prominent voices in current AI risk debates (Bostrom and Tegmark), both preceded by the Nobel laureate Hannes Alfvén, who in 1966 wrote a short story about AI existential risk – one of the first of its kind.
I will use the funds to plan and develop this course from scratch. As I'm currently writing a book on the history of artificial intelligence from a risk perspective, I already have a great deal of material and knowledge in this area: philosophical, technical, ethical, economic, aesthetic, cultural, evolutionary. I estimate the work will take about 9 months at 50% time to complete.
This is a one-man effort, though I will naturally ask peers to review content, approach, and pedagogy along the way. Besides the book on AI risk that I'm currently writing, my Ph.D. thesis was on the digitalization of Scandinavia in the 1960s and 70s, with an emphasis on the emergence of 'data' as a commodity and of computers as national infrastructure with increasing vulnerabilities. I'm currently working on a 3-year research project on the history of AI from the perspective of errors and mistakes (financed by the Swedish Research Council), which aligns well with the course I'm planning. I've taught several classes on the history of technology, the environment, and death and dying, including developing entire courses from scratch at undergraduate and graduate levels.
If this course is not developed and taught, students at Stockholm University and the KTH Royal Institute of Technology will have no other dedicated classes on this crucial topic of our time. Hence, fewer young and influential people will be spreading the word to the Nordic intellectual and activist scenes.
No other funding at this time.
Austin Chen
about 2 months ago
At @RyanKidd's suggestion, I've reopened this project and am committing half of the ask here at $5k. Looking forward to seeing this course developed and taught to students!
Johan Fredrikzon
about 1 month ago
Hi! I appreciate your re-opening my application – and the support from yourself and Ryan to get this project started!
Austin Chen
4 months ago
Hi Johan! Thanks for posting this application, I like the idea of an official university class on x-risk and you seem well-positioned to work on something like this.
If funded, would you be able to make the course materials publicly available for other professors and teachers to use? I'd love to see other classes follow your lead!
Johan Fredrikzon
4 months ago
@Austin Hi Austin, thanks for asking. I always try to make course materials available, so that others can draw on my work (and that of others) to shape their own materials. So yes, that's my ambition.
/j