
Funding requirements

  • Sign grant agreement
  • Reach min funding
  • Get Manifund approval

Grow an AI Safety TikTok Channel to Reach Tens of Millions of People

Technical AI safety · AI governance

Michaël Rubens Trazzi

Proposal · Grant
Closes September 15th, 2025
$12,400 raised
$2,000 minimum funding
$40,000 funding goal


Project summary

In the past two weeks, I have been posting daily AI Safety content on TikTok and YouTube, reaching more than 1M people.

This grant would pay for my time so I can keep posting daily content on TikTok / YouTube until the end of the year (20 weeks left). If I raise less than my funding goal, I will work proportionally to the amount raised (e.g., 5 weeks if I get $10k).

Why this matters: short-form AI Safety content is currently neglected. Most outreach targets long-form YouTube viewers, missing younger generations who get their information from TikTok. With 150M active TikTok users in the UK and US, this audience represents massive untapped potential for the talent pipeline (e.g., Alice Blair, who recently dropped out of MIT to work at the Center for AI Safety as a Technical Writer, exemplifies the kind of young talent I want to reach).

What impact I am planning to achieve:

  • Base case: maintaining the current momentum (1M views on YT + TikTok over the past two weeks, i.e. ~500k views/week) for 20 weeks would yield ~10M AI Safety views by the end of the year.

  • Best case: if I maintain baseline momentum and also land one viral video (1M views) every other week OR one semi-viral video (500k views) every week, either scenario adds ~10M views over 20 weeks, resulting in ~20M total AI Safety views by the end of the year (see the sketch below). This seems feasible, since TikTok's algorithm increasingly recommends content from creators with consistently high-performing videos.
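For concreteness, here is a minimal sketch of the arithmetic behind both scenarios (the weekly rates are the proposal's stated assumptions, not guarantees):

```python
# Sketch of the view projections above; rates are assumptions from the proposal.
WEEKS = 20
BASELINE_PER_WEEK = 500_000  # ~1M views per two weeks, the current run rate

base_case = WEEKS * BASELINE_PER_WEEK  # 10,000,000 views

# Best case adds either one viral video (1M views) every other week
# or one semi-viral video (500k views) every week; both add ~10M.
viral_bonus = (WEEKS // 2) * 1_000_000  # 10,000,000 extra views
semi_viral_bonus = WEEKS * 500_000      # also 10,000,000 extra views

best_case = base_case + viral_bonus     # 20,000,000 views

print(f"Base case: {base_case:,} | Best case: {best_case:,}")
```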

[Figure: TikTok performance, Jul 14 to Aug 10]

What are this project's goals? How will you achieve them?

Project Goals:

  1. Reach 10-20M views by the end of the year (as outlined in the summary above)

  2. Build an engaged audience of 15,000+ followers through an ecosystem approach: publish a mix of fully-safety content, partly/indirectly-safety content, and "AI is a big deal" videos to create a funnel in which viewers progressively engage with AI safety ideas, then convert the most engaged viewers (those who visit my profile and watch the pinned videos) into concrete actions through CTAs and links in bio (e.g., to aisafety.com). See the comments for more details.

How I'll achieve them:

  1. Post 1-3 clips daily across TikTok and YouTube

  2. Focus on AI-safety-related interviews with figures such as Geoffrey Hinton, Sam Altman, Ilya Sutskever, Tristan Harris, and Eliezer Yudkowsky

  3. Post clips quickly after the source interviews appear online, so the algorithm pushes them while the topic is fresh

How will this funding be used?

This funding will pay my salary.

$40k for 20 weeks of work is $2k per week, which corresponds to a ~$100k/year salary: roughly the opportunity cost of going back to work as an ML engineer in France. This enables me to work full-time on this project.

Essentially, every $2k pays for one week of work, which in turn translates to ~500k AI Safety views, i.e. about $4 per 1,000 views. By comparison, running TikTok ads would cost $5-15 per 1,000 views, and those viewers would be much less engaged.

Note: if I raise less than my funding goal, I will work proportionally to the amount raised (e.g., 5 weeks if I get $10k).
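As a minimal sketch of the cost arithmetic above (all inputs are the proposal's own assumptions):

```python
# Cost-per-view and proportional-funding arithmetic from the figures above.
WEEKLY_SALARY = 2_000     # $2k per week of full-time work
VIEWS_PER_WEEK = 500_000  # ~500k AI Safety views per week, assumed run rate

cost_per_1k_views = WEEKLY_SALARY / (VIEWS_PER_WEEK / 1_000)
print(f"${cost_per_1k_views:.2f} per 1,000 views")  # $4.00, vs $5-15 for TikTok ads

def weeks_funded(amount: float) -> float:
    """Weeks of full-time work a given amount of funding buys."""
    return amount / WEEKLY_SALARY

print(weeks_funded(10_000))  # 5.0 weeks if $10k is raised
print(weeks_funded(40_000))  # 20.0 weeks at the full $40k goal
```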

Who is on your team? What's your track record on similar projects?

Team: Michaël Trazzi.

Track record:

  1. Growing my AI Safety TikTok to 1M views in one month (with a single clip reaching 500k views) and my YouTube channel to 470k lifetime views.

  2. Some examples of clips that have performed especially well on TikTok over the past two weeks:

    1. Tristan Harris on Anthropic's blackmail results (150k views)

    2. Ilya Sutskever on AI being able to do all human jobs, and on making sure artificial superintelligences are honest (152k views)

    3. Daniel Kokotajlo on what happens in fast AI takeoff worlds "It's going to hit humanity like a truck" (54k views)

  3. I recently edited two AI-safety-related short-form videos (1, 2) for another content creator; they became the channel's most-watched videos by a large margin (3-4x more views than any other video)

  4. Directed the SB-1047 documentary (website), which involved working with and learning from ~4 seasoned video editors for ~6 months.

What are the most likely causes and outcomes if this project fails?

The most likely causes of this project reaching fewer people than my target are:

  1. Some weeks have less interesting content to clip than the past two weeks did. Answer: I expect slow weeks to be balanced out by weeks with more clip-worthy content than average. In practice, I expect people to talk about AI more, not less, as we approach the end of the year, so there will be more clips to make on average.

  2. The algorithm stops pushing my videos as hard as it has for the past two weeks. Answer: one cause would be TikTok down-ranking content that discusses AI. However, people's personal experience with AI is increasing, and with AGI / superintelligence clearly inside the Overton window thanks to content like AI 2027, the potential audience for these videos should grow, so the algorithm should push them more as people engage. Another cause would be getting shadow-banned or similar, in which case I could create a new account or move to other platforms such as WeChat.

How much money have you raised in the last 12 months, and from where?

In the past 12 months I have raised $143k for the SB-1047 documentary (see post-mortem here). The funding was almost entirely from a previous Manifund grant. $20k came from the Future of Life Institute.

Comments (10) · Offers (5) · Similar (8)
offering $200

Nathan Metzger

about 5 hours ago

The generality of this approach is a positive, since public awareness of AI risk itself is likely a prerequisite of good AI policy, which is likely a prerequisite of safe AI development.

offering $8,000

Neel Nanda

1 day ago

Seems like an interesting project, and impressive reach. What kinds of messages/calls to action do you hope to broadcast?

Also, presumably there's a typo above and you mean $10K for 5 weeks, not 10?


Jesse Richardson

1 day ago

Seconded -- I am interested in this project but want to hear more about what outcomes you hope to achieve from an expanded audience

offering $200

Nathan Metzger

1 day ago

I agree. Awareness is good in general, but some of the most watched clips don't really touch on AI Safety, and none of them have calls to action. ("Learn More Here," "Share This," "Call your representatives," etc.)


Michaël Rubens Trazzi

about 8 hours ago

@NeelNanda Yes that was a typo, fixed it!

Regarding messages and outcomes (cc @NeelNanda, @Jesse-Richardson and @Haiku), see my strategy below, which includes a diagram summarizing the approach (also included in the main proposal):

  1. Messages: my goal is to promote content that is fully or partly about AI Safety:

    1. Fully AI safety content: Tristan Harris (176k views) on Anthropic's blackmail results summarizes recent AI safety research in a way that is accessible to most people. Daniel Kokotajlo (55k views) on fast takeoff scenarios introduces the concept of automated AI R&D and related AI governance issues. These show that AI Safety content can reach a wide audience if the delivery or editing is good enough.

    2. Partly / indirectly AI safety content: Ilya Sutskever (156k views) on AI doing all human jobs, the need for honest superintelligence, and AI being the biggest issue of our time. Sam Altman (400k views) on sycophancy. These build the general AI awareness that makes viewers receptive to safety messages later on.

    3. "AI is a big deal" content: Sam Altman (600k views) talking about ChatGPT logs not being private in the case of a lawsuit. These videos aren't directly about safety but establish that AI is becoming a major societal issue.

The overall strategy is to prioritize fully-safety content with the potential for high reach, then partly / indirectly safety content that walks people through why AI could be a risk, and occasionally content that is more generally about AI being a big deal, which brings even more people in.

  2. Outcomes: rather than adding calls to action at the end of videos, which makes videos much less likely to reach a large audience on TikTok (mostly because people exit instead of re-watching) and is uncommon there compared to YouTube, especially for clips, I expect the outcomes to be:

    1. Engagement / following: about 50k people (3-4%) engaged with the content (shares, likes, comments, follows); see the rough funnel sketch after this list. I expect people who engaged to keep seeing my content (because TikTok will push it), in some cases to engage more and more with the content that is directly about safety, and eventually to integrate into the broader AI Safety ecosystem (to a certain degree).

    2. Profile clicks: About 0.5% of viewers click on the channel's profile (I've received 5k+ profile views). The two outcomes from that are:

      1. Watching the pinned videos: 4k of the views on the 3 pinned videos came from these 5k profile clicks, meaning a large fraction of people who visit the profile watch the pinned videos. In the future, one of these pinned videos could carry a strong CTA that leads directly to outcomes we care about around informing the public / representatives about AI Safety, similar to this one, which had a very high conversion rate in getting viewers to take action.

      2. Clicking on the link in bio: so far I don't have a clickable link, but I plan to link to e.g. aisafety.com to direct viewers to resources for learning more about AI Safety.

    3. Progressive exposure: Most people who eventually work on AI safety needed multiple exposures from different sources before taking action. Even viewers who don't click anywhere are getting those crucial early exposures that add up over time.
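A rough funnel model of the conversion figures above (a sketch: the total-views input is an assumed order of magnitude based on the lifetime numbers in the proposal, and the rates are the self-reported ones):

```python
# Approximate engagement funnel implied by the self-reported figures above.
total_views = 1_400_000      # assumed: ~1M TikTok + ~470k YouTube to date
engage_rate = 0.035          # 3-4% engage (shares, likes, comments, follows)
profile_click_rate = 0.005   # ~0.5% of viewers visit the profile (5k+ reported)
pinned_watch_rate = 4_000 / 5_000  # 4k pinned views from 5k profile visits

engaged = int(total_views * engage_rate)                # ~50k people
profile_clicks = int(total_views * profile_click_rate)  # ~7k at these inputs
pinned_views = int(profile_clicks * pinned_watch_rate)

print(f"{engaged:,} engaged, {profile_clicks:,} profile visits, "
      f"{pinned_views:,} pinned-video views")
```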

offering $8,000

Neel Nanda

about 8 hours ago

Gotcha, thanks! @michaeltrazzi

That seems a pretty reasonable plan and you've gotten good reach. I'm not confident this is a good idea, but I think that's plausible and more value of information here would be good, so I've donated another month's worth. Good luck!


Michaël Rubens Trazzi

about 8 hours ago

Thanks @NeelNanda !


Jesse Richardson

about 1 hour ago

Thanks for sharing! My other question is: how much time are you spending on this per week? Is the TikTok + YouTube work roughly a full-time job at the moment?

offering $200

Andrew G

1 day ago

Seems like a very promising approach!

offering $2,000

Brenton Milne

1 day ago

Great plan. Donated!