***Watch our exclusive teaser clip of the interview on X/Twitter and LinkedIn***
We spent the last couple of weeks in New York, where we hired a full crew (around 9–10 people on set) to film a professional, cinematic interview with Gary Marcus.
Our goal is to create a cinematic, accessible, feature-length documentary. 'Making God' is an investigation into the controversial race toward artificial general intelligence (AGI).
Our audience is largely non-technical, so we will first give them a thorough grounding in recent advancements in AI before exploring the race toward the most consequential piece of technology ever created.
Following in the footsteps of influential social documentaries like Blackfish, Seaspiracy, The Social Dilemma, and An Inconvenient Truth, our film will shine a light on the risks associated with the development of AGI.
We are aiming for film festival acceptances, nominations, and wins, and for the film to be streamed on the world's biggest streaming platforms.
This will give the non-technical public a strong grounding in the risks of a race to AGI. If successful, hundreds of millions of streaming subscribers will be better informed about the risks and more likely to take action when a moment presents itself.
Making God will begin by introducing an audience with limited technical knowledge to recent advancements in AI. For some viewers, the only AI product they may have used or heard of is ChatGPT, which OpenAI launched in November 2022. A documentary like this is neglected: most other AI documentaries assume a great deal of prior knowledge.
After grounding the audience in AI advancements and the future risks they may pose, we dive deep into the frontier, looking at the individual driving forces behind the race to AGI. We will put a spotlight on the CEOs behind the major AI companies, interview leading experts, and speak to worried voices in political and civil society.
The documentary will take an objective, truth-seeking approach. Its primary goal is to understand whether we should be worried or optimistic about the coming technological revolution.
We think advanced AI and AGI, if developed correctly and with complementary regulation and governance, can change the world for the better.
We are worried that, as things stand, leading AI companies seem to be prioritizing capabilities over safety, international cooperation on AI governance seems to be breaking down, and technical alignment bets might simply not pay off in time.
We think that, at minimum, a documentary made for people who do not yet know about the risks, aimed at a huge audience (such as a streaming service's), might help the public better understand those risks. Hundreds of millions of people get their content from streaming services.
At most, we might catalyze a Blackfish-, Seaspiracy-, or An Inconvenient Truth-style spirit in the audience, so that one day they might protest, contact their legislators, join a movement, and so on.
We have conducted five cinematic interviews with members of civil society, unions, legal experts, and AI experts. We have provided stills; note that the footage is yet to be fully edited, but the stills give an indication of the style and quality.
To increase the likelihood of film festival acceptance, and streaming service acquisition thereafter, we need additional funding over the next two months to hire a full production team and gear. Mike filmed these interviews by himself, and I (Connor) conducted the interviews.
We went to UCLA for a conference on AI and nonprofits, and later filmed Rose in her family home. As we entered, we met her husband and her two dogs. Rose beamed at us and showed us into her office, hoping we wouldn't find it too messy. It wasn't!
When we sat down to interview her, Rose spoke to us about: her work in nonprofits; being dragged into the AI conversation through her legal background; the history of OpenAI as a nonprofit; Delaware Public Benefit Corporations and Anthropic; her hopes and worries around Sam Altman's firing and the OpenAI board; and the worries she has for her family around the development of AGI.
Across three separate 80,000 Hours podcast episodes, she has amassed millions of views for her expertise on AI lab nonprofit structures.
Ellen retired last year but came back to UCLA to work at the center focused on nonprofits. Like Rose, she had been dragged into the world of AI and is worried about its implications for the world. As we pulled up to her drive, Ellen ushered us into her family home. We were briefly introduced to her husband, Sunny, a retired lawyer, and Ellen asked if we would mind setting up while she called a student to go through their work. Even after retiring, it seemed Ellen hadn't lost her enthusiasm to teach and support.
Ellen spoke to us in her home office about: the incorporated missions of nonprofits; valuing nonprofit AI research labs; her worries about the future; and her optimism about humanity.
Holly spoke to us in Dolores Park at a leafleting session where she and other Pause AI volunteers gave their spare time to educate the public on risks from AI. The public seemed interested in what they had to say, but most smiled and carried on with their day. Holly and the other volunteers spoke to us about their reasons for protesting. Their solution for mitigating risks from AI is a 'pause' on its development.
Eli spoke to us about: his work forecasting the development of AI, in particular artificial superintelligence; a brief history of deep learning; what LLMs are; his work on AI 2027, predicting when ASI might flood the remote-job economy; his worries about AGI lab race dynamics and a race to the bottom on AI safety; US–China race dynamics; his hope that we slow down a bit to get this right; and the burden of predicting possible catastrophe while the rest of the world seems unaware and unprepared.
She spoke to us about her political campaigning to educate members of Congress on risks from AI. She serves on SAG-AFTRA's New Technology Committee, focusing on protecting actors' rights against AI misuse; she became interested in AI safety in 2020 and has since been advocating for regulation of AI-generated content and deepfakes. She spoke about her job-loss concerns, too.
We have also been interviewing the general public about their views on AI and their worries and hopes.
Interviews we are currently working to secure include:
Cristina Criddle, Financial Times tech correspondent covering AI (recently broke the FT story about OpenAI giving days, rather than months, of safety testing to new models).
Prof. David Krueger.
David Duvenaud, Former Anthropic Team Lead.
John Sherman, Dads Against AI and podcasting.
Jack Clark (we are in touch with Anthropic Press Team).
Prof. Yoshua Bengio (we are in touch with his team).
Geoffrey Hinton (in weaker talks).
Dean Ball (in discussions with the White House AI Advisor).
Kelsey Piper, Vox.
Daniel Kokotajlo, formerly OpenAI.
AI Lab employees.
Lab whistleblowers.
Civil society leaders.
Some Rough Numbers:
Festival Circuit: We are targeting acceptance at major film festivals including Sundance, SXSW, and Toronto International Film Festival, which have acceptance rates of 1-3%.
Streaming Acquisition: Following festival exposure, we aim for acquisition by Netflix, Amazon Prime, or Apple TV+, platforms with 200M+ subscribers collectively. Based on the performance of comparable documentaries, we estimate:
Conservative scenario: 8M viewers (4% platform reach)
Moderate scenario: 15M viewers (7.5% platform reach)
Optimistic scenario: 25M+ viewers (12.5%+ platform reach)
Impact Metrics: We will track:
Viewership numbers across platforms
Pre/post viewing surveys on AI risk understanding
Media coverage and policy discussions citing the documentary
Changes in public opinion polling on AI regulation
Theory of Impact: If successful, we will create an informed constituency capable of supporting responsible AI development policies during potentially critical decision points in the next 2-5 years.
To have a serious chance of being picked up by streaming services, the production quality and entertainment value have to be high. As such, we need the following funding over the next three months to create a product at this level.
Accommodation [Total: £30,000]
Airbnb: £10,000 a month for 3 months (dependent on filming locations and the need to accommodate crew).
Travel [Total: £13,500]
Car Hire: £6,000 for 3 months.
Flights: £4,500 for 3 months (to move us and crews around to locations in California, D.C., and New York).
Misc (trains, cabs, etc.): £3,000 for 3 months.
Equipment [Total: £41,000]
Purchasing Filming Equipment: £5,000
Hiring Filming Equipment: £36,000 (18 shooting days)
Production Crew (30 Days of Day Rate) [Total: £87,000]
Director of Photography: £19,500
Sound Recordist: £18,000
Camera Assistant/Gaffer: £13,500
Additional Crew: £36,000
Director (3 months) [Total: £15,000]
Executive Producer (3 months) [Total: £15,000]
Misc: £25,000 (to cover unforeseen costs, legal advice, insurance, and other practical necessities).
TOTAL: £226,500 ($293,046)
Mike Narouei [Director]:
Former Creative Director at Control AI, where he directed multiple viral AI risk films amassing 60M+ total views over nine months. Watch 'Your Identity Isn't Yours', which Mike filmed, produced, and edited while at Control AI.
Directed and led a 40-person production team on a £100,000+ commercial, generating 32M views/engagements across social media within one month.
Artistic Director for Michaël Trazzi's 'SB-1047' documentary.
Work featured by BBC, Sky News, ITV News, and The Washington Post.
Partnered with MIT at the World Economic Forum in Davos, demonstrating Deepfake technology live in collaboration with Max Tegmark, covered by The Washington Post & SwissInfo.
Collaborated with Apollo Research to create an animated demo for their recent paper.
Shortlisted for the Royal Court Playwriting Award.
Directed a number of commercials for clients such as Starbucks, Pale Waves and Mandarin Oriental.
Connor Axiotes [Executive Producer]:
Has appeared on TV multiple times and has helped produce videos and TV interviews.
Wrote multiple op-eds for major papers and blogs; have a look here for a repository.
Produced viral engagement, with millions of impressions on X, at Conjecture and the Adam Smith Institute (ASI).
Worked as a senior communications adviser to a UK Cabinet Minister, making videos and coordinating with senior journalists and TV channels in high-stakes, high-pressure environments.
Wrote the centre-right Adam Smith Institute's first-ever AI safety policy paper, 'Tipping Point: on the edge of Superintelligence', in 2023.
Worked on a Prime Ministerial campaign and a General Election as part of the then Prime Minister's operations team. Below, he works for the Prime Minister in a media capacity in 2024.
No film festival acceptance.
No streaming service interested in the project.
No one wants to talk to us for interviews (which is definitely not what we are seeing right now).
$100,000-ish from private philanthropic funders.
Michaël Rubens Trazzi
about 16 hours ago
@Connoraxiotes curious: how much did flying to NYC and having 9/10 people on set cost?
With that burn, how many more interviews can you shoot?
Connor Axiotes
about 16 hours ago
@michaeltrazzi hey!
All in all, just above $17,000. The 9–10 people on set included myself, Mike, Gary, and our film photographer.
A Netflix shoot usually costs a minimum of $20k, and we want our shoots to be at that level, as our aim is to break into film festivals and then streaming services. That's why we had four cameras (three FX6s and one FX3)!
I would pay this price again, though, because we came away with an intriguing, super-cinematic interview, which was our purpose with this documentary.
It was a particularly expensive shoot because:
We had less than one week's notice to sort the whole shoot and, as we are based in SF, to fly over to NYC and hire a local crew.
Because Gary Marcus was coming to NYC and no longer has an office there, we had to pay for a filming location (whereas until now we've been using people's offices and homes).
Our Airbnb was expensive because there was a chance we might have had to house crew and do some b-roll filming there, so it had to suit both.
We had to ship extra luggage on our flights.
I will note that I've been quite successful at getting price reductions on things like crew and gear hire. And I'll continue to fight for value. Mike is also quite experienced with these bigger shoots so he has a good intuition for what should cost what.
Regarding the burn, we also raised $50,000 privately in the last few weeks. So although the interview was a significant cost, we thought it was worth it. BUT to finish the documentary we still need to raise a lot more.
Michael you should come help us fundraise aha! We could use your expertise.
[Costings below if you'd like a look!]
Austin Chen
about 12 hours ago
@Connoraxiotes Just wanted to say that I very much appreciate this level of transparency and openness about your thinking & finances; I think it's super helpful for others embarking on similar projects!
Seldon
1 day ago
🏴☠️ We are staunch supporters of creating a global movement for existential security and, if this is a success, it will be one of the best ways to kickstart it. We will be watching from the sidelines!
Stephan Wäldchen
2 days ago
This is a great initiative! Sounds like the crew knows what they are doing!
The new pope is also into AIS; maybe you could get him to interview? :D
Chris Akin
29 days ago
I worked with Mike on a project in late 2024, creating motion graphics for a research demo by Apollo Research. Mike's work was excellent under the circumstances of tight deadlines, evolving project scope, and the technical complexity of the subject matter. I would certainly work with him again.
Yashvardhan Sharma
29 days ago
Communication of AI risks is probably the most important thing right now and this seems like a strong team well-positioned to take it on.
Connor Axiotes
about 1 month ago
We spent the last couple of weeks filming our “Proof of Concept”, to show funders the quality of our documentary.
Next steps: edit these five interviews into our Proof of Concept video; hire a full production team for new shoots and reshoots where necessary; get more great interviews; continue fundraising.
Interviews we are working on:
Cristina Criddle, Financial Times tech correspondent covering AI (recently broke the FT story about OpenAI giving days, rather than months, of safety testing to new models).
David Kokotajlo, Former Anthropic Team Lead.
John Sherman, Dads Against AI and podcasting.
Jack Clark (we are in touch with Anthropic Press Team).
Gary Marcus (said to get back to him in a couple weeks).
Kelsey Piper, Vox.
Daniel Kokotajlo, formerly OpenAI.
AI Lab employees.
Lab whistleblowers.
Civil society leaders.
The legal interviews focus on Sam Altman and OpenAI, as the professors are legal experts in the field of nonprofit reorganization. Future interviews will focus on other AGI labs too, as with the Eli interview, which covers the other players in the field.
The stills are from interviews shot with a one-man crew (just Mike, our director). Stills from future interviews will be even more cinematic with a full (or even half) crew. This is what we need the immediate next round of funding for.
Connor Axiotes
about 1 month ago
@Connoraxiotes It should have said David "Duvenaud", Former Anthropic Team Lead.
daniel faggella
about 1 month ago
We need to make sure the Political Singularity (people realizing how important creating / controlling AGI god is) happens before the Technological Singularity. Calling out the arms race is a BIG part of getting this done. Get this done, gents.
Austin Chen
about 1 month ago
Approving this project as compatible with Manifund's mission of fostering education on AI risks. I've spoken with Connor and Mike in person and think they are taking a tactical and reasonable approach to making this documentary.
As with my comment on Dads Against AGI, I'd clarify that I personally hold some values dissonance with the grantees here -- for example, I mostly feel that AI labs, and their CEOs like Sam Altman and Dario Amodei, are generally doing good work. But Manifund aims to be a neutral platform, where projects can express a variety of different viewpoints, and so we are happy to facilitate grants to this documentary.
Gábor Várkonyi
about 1 month ago
This is the most important topic today, by far. I often wonder if I will regret not spending all my money on it. This documentary seems to be a pretty important piece for convincing the public; I'm also personally interested in it.
Nathan Labenz
about 1 month ago
We need more accessible, high-quality education about the AI big picture.
Steven Millblank
about 1 month ago
Impressed by the director's work, and I know the Adam Smith Institute and Control AI have high standards. I think popular movies/documentaries can be a powerful force for persuasion and change.
Also, I have agreed to match the next $45k donated. Donate now for double the impact.
Esben Kran
about 1 month ago
I've worked directly with Connor, and his dedication to making this truly impactful is inspiring. I am not aware of any project of similar scope or aim happening right now, which is surprising given the level of AI risk awareness. I highly suggest funding this (and similar) projects.
Greg Colbourn
about 1 month ago
I've provided some seed funding for this (outside of Manifund). We really need broad public communication on AI risk to get it further up the political agenda. Something like Seaspiracy -- where they go down the rabbit hole -- but for AI, would be amazing.
Holly Elmore
about 1 month ago
I believe this kind of truly entry-level broadcast communication on AI risks is key to clueing the public (who already don't want to take the massive risks that AI companies are taking) in to what's really going on.