Grow An AI Safety Tiktok Channel To Reach Ten Million People

Technical AI safety · AI governance

Michaël Rubens Trazzi

Active grant · $12,400 raised of a $40,000 funding goal


Project summary

In the past month, I have been posting daily AI Safety content on TikTok and YouTube, reaching more than 1M people.

This grant would pay for my time so I can keep posting daily content on TikTok and YouTube until the end of the year (20 weeks left). If I receive less than my target funding, I will work proportionally to how much I receive (e.g. 5 weeks if I get $10k).

Why this matters: Short-form AI Safety content is currently neglected; most outreach targets long-form YouTube viewers, missing younger generations who get their information from TikTok. With 150M active TikTok users in the UK and US, this audience represents massive untapped potential for the talent pipeline (e.g. Alice Blair, who recently dropped out of MIT to work at the Center for AI Safety as a Technical Writer, exemplifies the kind of young talent I want to reach).

What impact I am planning to achieve:

  • Base case: Maintaining the momentum of the past four weeks (1.3M views on YouTube + TikTok, i.e. 325k views/week) for 20 weeks would yield 6.5M views by the end of the year.

  • Best case: Maintaining the momentum of the past two weeks (1M views on YouTube + TikTok, i.e. 500k views/week) for 20 weeks would yield 10M AI Safety views by the end of the year.

Below is TikTok's performance from Jul 14 to Aug 10:
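The projections above are simple extrapolations; a quick sanity check of the arithmetic, using only the numbers stated in this proposal:

```python
# Sanity check of the base-case and best-case view projections above.
WEEKS_LEFT = 20

base_per_week = 1_300_000 / 4   # 1.3M views over the past four weeks
best_per_week = 1_000_000 / 2   # 1M views over the past two weeks

base_total = base_per_week * WEEKS_LEFT
best_total = best_per_week * WEEKS_LEFT

print(f"Base case: {base_total / 1e6:.1f}M views")  # 6.5M
print(f"Best case: {best_total / 1e6:.1f}M views")  # 10.0M
```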

What are this project's goals? How will you achieve them?

Project Goals:

  1. Reach 6.5-10M views by the end of the year (as outlined in summary above)

  2. Build an engaged audience of 15,000+ followers through an ecosystem approach: publish a mix of fully safety-focused content, partly or indirectly safety-related content, and "AI is a big deal" videos to create a funnel where viewers progressively engage with AI safety ideas. Convert the most engaged viewers (those who visit my profile and watch pinned videos) into concrete actions through CTAs and links in bio (e.g. to aisafety.com). See comment for more details.

How I'll achieve them:

  1. Post 1-3 clips daily across TikTok and YouTube

  2. Focus on AI-Safety-related interviews with figures such as Geoffrey Hinton, Sam Altman, Ilya Sutskever, Tristan Harris, and Eliezer Yudkowsky

  3. Post clips soon after the interviews appear online, so the algorithm pushes them while they are topical

How will this funding be used?

This funding will pay my salary.

$40k for 20 weeks of work means $2k per week, which corresponds to a ~$100k/year salary, i.e. my opportunity cost of going back to work as an ML engineer in France, and enables me to work on this project full-time.

Essentially, every $2k pays for one week of work, which (in the best case) translates to ~500k AI Safety views, i.e. about $4 per 1,000 views. In comparison, running ads on TikTok would cost $5-15 per 1,000 views, and those viewers would be much less engaged.
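The cost-per-view figure above follows directly from the budget and the best-case view rate:

```python
# Cost-per-view arithmetic from the funding section (best-case numbers).
weekly_pay = 40_000 / 20              # $2k per week of work
weekly_views = 500_000                # best-case views per week
cost_per_1000_views = weekly_pay / (weekly_views / 1000)

print(f"${cost_per_1000_views:.0f} per 1,000 views")  # $4, vs. $5-15 for TikTok ads
```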

Note: If I get less than my target funding, I will work proportionally to how much funding I get (say 5 weeks if I get $10k).

Who is on your team? What's your track record on similar projects?

Team: Michaël Trazzi.

Track record:

  1. Growing my AI Safety TikTok to 1M views in the past month (with a single clip reaching 500k views) and my YouTube channel to 470k lifetime views.

  2. Some examples of clips that have performed especially well on TikTok over the past month:

    1. Tristan Harris on Anthropic's blackmail results (150k views)

    2. Ilya Sutskever on AI being able to do all of human jobs, and making sure artificial superintelligences are honest (152k views)

    3. Daniel Kokotajlo on what happens in fast AI takeoff worlds "It's going to hit humanity like a truck" (54k views)

  3. I recently edited two AI-safety-related short-form videos (1, 2) for another content creator; they became the most-watched videos on the entire channel by a large margin (3-4x more views than any other video)

  4. Directed the SB-1047 documentary (website), which involved working with and learning from ~4 seasoned video editors for ~6 months.

What are the most likely causes and outcomes if this project fails?

The most likely causes of this project reaching fewer people than my target would be:

  1. Some weeks happen to have less interesting content to clip than the past few weeks. Answer: If this is true for some weeks, I expect other weeks to have more clip-worthy content than average, which should at least balance it out. In practice, I expect that as we approach the end of the year, people will talk more about AI, not less, so there will be more clips to make on average.

  2. The algorithm stops pushing my videos as much as it has for the past two weeks. Answer: One reason would be that TikTok starts pushing AI-related content less. However, people's personal exposure to AI is increasing, and with AGI/superintelligence clearly inside the Overton window thanks to content like AI 2027, the potential audience for these videos should grow, and the algorithm should push them more as people engage. Another reason would be that I somehow get shadowbanned or similar; in that case I could create a new account or move to other platforms like WeChat.

How much money have you raised in the last 12 months, and from where?

In the past 12 months I have raised $143k for the SB-1047 documentary (see post-mortem here). The funding was almost entirely from a previous Manifund grant; $20k came from the Future of Life Institute.
