
Grow an AI Safety TikTok Channel to Reach Ten Million People

Technical AI safety · AI governance

Michaël Rubens Trazzi

Active grant
$19,200 raised
$40,000 funding goal


Project summary

In the past month, I have been posting daily AI safety content on TikTok and YouTube, reaching more than 1M people.

This grant would pay for my time so I can keep posting daily content on TikTok / YouTube until the end of the year (20 weeks left). If I receive less than my target funding, I will work proportionally to how much funding I get (say, 5 weeks if I get $10k).

Why this matters: Short-form AI safety content is currently neglected; most outreach targets long-form YouTube viewers, missing younger generations who get their information from TikTok. With 150M active TikTok users in the UK and US, this audience represents a large untapped talent pipeline (e.g., Alice Blair, who recently dropped out of MIT to work at the Center for AI Safety as a Technical Writer, exemplifies the kind of young talent I'd want to reach).

What impact I am planning to achieve:

  • Base case: Maintaining momentum from the past four weeks (1.3M views on YouTube + TikTok, i.e. 325k views / week) for 20 weeks would yield 6.5M views by the end of the year.

  • Best case: Maintaining momentum from the past two weeks (1M views on YouTube + TikTok, i.e. 500k views / week) for 20 weeks would yield 10M AI safety views by the end of the year. (The arithmetic is sketched below.)
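
For a quick sanity check, here is a minimal sketch of the projection arithmetic; the weekly view rates are the figures quoted above, and the 20-week horizon is the time remaining in the year:

```python
# Back-of-the-envelope view projections using the figures quoted above.
weeks_remaining = 20

base_weekly = 1_300_000 / 4   # 1.3M views over the past four weeks -> 325k/week
best_weekly = 1_000_000 / 2   # 1M views over the past two weeks -> 500k/week

print(f"Base case: {base_weekly * weeks_remaining / 1e6:.1f}M views")  # 6.5M
print(f"Best case: {best_weekly * weeks_remaining / 1e6:.1f}M views")  # 10.0M
```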

[Screenshot: TikTok performance, Jul 14 to Aug 10]

What are this project's goals? How will you achieve them?

Project Goals:

  1. Reach 6.5-10M views by the end of the year (as outlined in the summary above)

  2. Build an engaged audience of 15,000+ followers through an ecosystem approach: publish a mix of fully-safety content, partly/indirectly-safety content, and "AI is a big deal" videos to create a funnel where viewers progressively engage with AI safety ideas. Convert the most engaged viewers (those who visit my profile and watch pinned videos) into concrete actions through CTAs and links in bio (e.g. to aisafety.com). See the comments for more details.

How I'll achieve them:

  1. Post 1-3 clips daily across TikTok and YouTube

  2. Focus on AI-safety-related interviews with figures such as Geoffrey Hinton, Sam Altman, Ilya Sutskever, Tristan Harris, and Eliezer Yudkowsky

  3. Post clips quickly after the source interviews appear online, so the algorithm pushes them

How will this funding be used?

This funding will be used to pay for my salary.

$40k for 20 weeks of work means $2k per week, which corresponds to a ~$100k / year salary, i.e. the opportunity cost of going back to work as an ML engineer in France. This enables me to work full-time on this project productively.

Essentially, every $2k pays for one week of work, which (in the best case) translates to ~500k AI safety views, or about $4 per 1,000 views. In comparison, running ads on TikTok would cost $5-15 per 1,000 views, and even then the viewers would be much less engaged.
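
For concreteness, a minimal sketch of that cost-per-view comparison (the ad-rate range is the one quoted above, not a measured figure):

```python
# Cost per 1,000 views for grant-funded clips vs. the quoted TikTok ad rates.
weekly_cost = 2_000       # $2k pays for one week of work
weekly_views = 500_000    # best-case weekly views

cost_per_1k = weekly_cost / (weekly_views / 1_000)
print(f"Grant-funded clips: ${cost_per_1k:.2f} per 1,000 views")  # $4.00
print("TikTok ads (quoted range): $5-15 per 1,000 views")
```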

Note: If I get less than my target funding, I will work proportionally to how much funding I get (say 5 weeks if I get $10k).

Who is on your team? What's your track record on similar projects?

Team: Michaël Trazzi.

Track record:

  1. Growing my AI safety TikTok to 1M views in the past month (with a single clip reaching 500k views) and my YouTube to 470k lifetime views.

  2. Some examples of clips that have performed especially well on TikTok over the past month:

    1. Tristan Harris on Anthropic's blackmail results (150k views)

    2. Ilya Sutskever on AI being able to do all of human jobs, and making sure artificial superintelligences are honest (152k views)

    3. Daniel Kokotajlo on what happens in fast AI takeoff worlds "It's going to hit humanity like a truck" (54k views)

  3. I recently edited two AI-safety-related short-form videos (1, 2) for another content creator, which ended up being the most watched videos on the entire channel by a large margin (3-4x more views than all the other videos)

  4. Directed the SB-1047 documentary (website), which involved working with and learning from ~4 seasoned video editors for ~6 months.

What are the most likely causes and outcomes if this project fails?

The most likely causes of this project reaching fewer people than my target would be:

  1. Some weeks happen to have less interesting content to make clips from than the past week did. Answer: If this is true for some weeks, I expect other weeks will have more content than average, which should at least balance it out. In practice, I expect that as we get closer to the end of the year, people will talk about AI more, not less, so there will be more clips to make on average.

  2. The algorithm does not push my videos as much as it has for the past two weeks. Answer: One reason would be that TikTok starts pushing AI content less. However, people's personal experience with AI is increasing, and with AGI / superintelligence clearly inside the Overton window thanks to content like AI 2027, the potential audience for these videos seems likely to grow, so the algorithm should push them more as people engage. Another reason would be that I somehow get shadow-banned or similar. In that case I could create a new account or transition to other platforms like WeChat.

How much money have you raised in the last 12 months, and from where?

In the past 12 months I have raised $143k for the SB-1047 documentary (see post-mortem here). The funding was almost entirely from a previous Manifund grant; $20k came from the Future of Life Institute.

Comments (28)

Michaël Rubens Trazzi

8 days ago

Thanks to everyone who has donated so far!

Quick update on the first two weeks of this project (Aug 10-Aug 23):
- We've reached 2.6M views: on track for the "best case" scenario outlined above of 500k views / week
- We went from 2k to 14.4k followers: getting really close to the 15k follower target

Highlighted videos:
- Tristan Harris on a country of geniuses in a datacenter, AIs lying and scheming, and the process of building increasingly powerful systems being "insane" (109k views)
- Eric Schmidt talking about what's going to happen in AI in 1-2 years, including AI automation, AI agents and recursive self-improvement (1.2M views)
- Eliezer Yudkowsky saying "If anyone builds it everyone dies, you're not going to solve the alignment problem in the next couple years" (68k views)

donated $800

Jesse Richardson

14 days ago

I think this kind of work on building public awareness of AI safety is important, and Michael has good experience with it from the SB-1047 documentary. I still have some qualms about whether $100K/year is the right target number for being paid to do this, but on balance I'm happy to contribute a small amount.


Michaël Rubens Trazzi

14 days ago

@Jesse-Richardson Thanks Jesse!

donated $5,000

Michael Andregg

18 days ago

Trazzi is awesome, and the project seems like a good way to reach a large audience.


Michaël Rubens Trazzi

18 days ago

@michaelandregg Thanks Michael!


Michaël Rubens Trazzi

20 days ago

Update Aug 13:
- Corrected my projections to be more accurate and conservative: now targeting 6.5M-10M views by year end (I had initially made a math error where things were off by a factor of two).
- I've posted some more thoughts on LW / EAF regarding where I'm expecting most of the impact to be, expanding on what I call "progressive exposure".


Marcus Abramovitch

20 days ago

Seems interesting though I am somewhat questioning the funding ask here. Is this really full time work that commands a full time 6 figure (annualized) salary to post 1-3 clips a day? And assuming you believe in AI safety sufficiently, if you only get $12.4k (current amount as of this comment) in funding, are you going to just quit posting in 6 weeks and a day?

FWIW, I don't want to single you out; I have this kind of critique of many, many people doing AI safety work, but this just seems like a striking example of it.


Michaël Rubens Trazzi

20 days ago

@MarcusAbramovitch Thanks for the questions. Let me address both points:

On the work involved: I spend 5-6 hours a day going through multiple podcasts to find the very best clips; most of them don't end up being posted. On top of that, there are 2-3 hours of editing / uploading work that is hard to see (adapting clips from horizontal to vertical by scaling, repositioning, and changing backgrounds; iterating on captions; fixing the audio and subtitles; upscaling; uploading to different platforms; checking on different devices). It definitely adds up to a full day of work, especially as I do more clips.

On the six-figure (annualized) salary: I've spent some time thinking about how much to ask for my time on this grant. One important phrase above is "work full-time on this project productively". I did consider lower amounts that would mean basically only paying bills and nothing else, but I don't think that would have been sustainable or helpful for making this project go well.

To give more context, last year I made the mistake of under-budgeting my salary for the SB-1047 documentary (see post-mortem here), which meant I effectively paid myself for only 2 of the ~8-9 months I spent on it. One lesson I learned is that compensating yourself for your time is not just a cherry on top once everything else is figured out, but something necessary to work productively for extended periods.

> And assuming you believe in AI safety sufficiently, if you only get $12.4k (current amount as of this comment) in funding, are you going to just quit posting in 6 weeks and a day

After 6 weeks of full-time work, I'd evaluate options to maximize the project's continued impact: transitioning to part-time while fundraising, mentoring someone to continue, or documenting my process for others to pick it up.

I appreciate the clarifying questions, let me know if you need anything else.


Cian

19 days ago

@MarcusAbramovitch Yeah, I also like the idea, but $100-$300 per TikTok clip seems weirdly expensive. If you drop this due to insufficient funding, I hope someone picks up the idea as a hobby.

donated $800

Jesse Richardson

19 days ago

Yeah, I basically have a similar qualm to Marcus; curious to see whether this reply has assuaged his concerns, as right now I'm fairly on the fence about contributing.


Michaël Rubens Trazzi

19 days ago

@cian You're right that the higher end of your range sounds high framed this way. The high-reach clips (>100k views) are great ROI, but if you don't average things out and account for the heavy tail (most of the value comes from the small number of clips that take off, and these continue to pay dividends as you post more), the ones that don't take off feel pricey. If it helps: TikTok's algorithm favors posting at most 1-3x daily, so I actually cut 3-5 clips and only post the best ones; you're not just paying for what gets posted.

@Jesse-Richardson Thanks for engaging while still being on the fence. Curious what your main concerns are and whether there are any more questions I can answer. Is it the compensation level, the time commitment, or something else? I'm also happy to explore different structures if that helps, though I'm not sure I can accommodate people individually on Manifund, since I'd still need to be fair to people who have already donated here.


Marcus Abramovitch

18 days ago

@michaeltrazzi I want to quickly note that it's a bit unfair for me to single you out on this; it's a critique I have of many AI safety projects. This one just came up high on Manifund when I logged on for other reasons, and I saw donations from people I respect.

That said, no, this doesn't exactly assuage my concerns/make me want to donate here. I don't think this work really takes that much time but more importantly, I just don't think it passes the cost-effectiveness bar that I hold my donations and my regrants to given the other opportunities I know of that are out there.

I don't think people doing "non-profit" work should have to make some bare minimum that barely covers expenses, but I do think that $100k/year to repost short clips on TikTok shows that spending habits in EA have gotten a bit out of control, and we should do a bit of a sanity check. I also don't think the correct metric here is the equivalent salary of an ML engineer in France. People usually/often inflate their earning potential, but more importantly, I just don't think it's reasonable for non-profit work to pay your maximum earning potential.

I think there are two relevant cost-effectiveness factors to consider in grantmaking: raw impact per dollar, and the reasonable cost to reproduce. I feel uncertain about the former here (and I'm going to be doing a project to address this), but on the latter I feel pretty confident that I could get 4-5x this impact by having low-cost EAs in 2nd/3rd-world countries, or college students, do this work.

Based on what you wrote, I think you might feel undercompensated for previous work (on the SB-1047 documentary) and thus want to recoup some of that on this work, but I don't think that is what philanthropic funders should do here. Perhaps I'm somewhat open to retroactive funding for the documentary if I/others investigated, but I don't think it's good to

donated $2,000

Austin Chen

18 days ago

quick notes:

  • I love that Michael is willing to be so transparent and responsive here. The quality of the comments here is great; thanks everyone for weighing in!

  • I think highly of all of Marcus, Jesse, and Michael -- and even so, I think it's reasonable for them (and others) to have strongly different takes on whether a particular project is worth funding/doing.

    • I think sometimes EA people fall into a trap of thinking there's one "right answer" about cost-effectiveness or theory of change, and that we need to argue it out to get to ground truth -- and while there's some value in this, there's also a lot of value in enabling lots of experiments to be executed by the people who are most excited about them. Manifund is built around the latter philosophy: "let a hundred projects bloom"!

  • As for myself, I funded this for a small amount, to signal support for Michael based on his past track record. I think the SB1047 documentary was well done (though, I was also a funder of that, so might be biased). I think the viewership numbers here are impressive, though I'm not super calibrated on what passes for amazing media presence.

    • Contrary to Marcus, I suspect EA ought to pay people a lot more, though there's a lot of nuance ofc. I'm also very sympathetic to viewing funding this as "retro funding for the SB1047 documentary".

    • My biggest question is whether short clips of existing content provide value to the world; how much do you get to shape the narrative if you're curating vs. creating? It's not a model I'm familiar with, anyway, but I do think it's on the rise.


Michaël Rubens Trazzi

15 days ago

@MarcusAbramovitch I get that you want to achieve the highest impact per dollar and you'd want to find the most cost-effective option. However, I’d just like to offer some nuances / corrections to the points you’re making.

re 4-5x more impact per $:

  1. You're not only paying for the time spent working on a project, but also for the initial traction the project has, which might take a couple of months to reproduce, or fail entirely. In other words, you'll need to pay for the initial "warming up" phase, which is uncertain and won't have impact right away. And I believe the failure rate for getting similar results in 1-2 months would be high.

  2. Relatedly, you'll need to find someone motivated enough to work with a low number of views at the start, who would also commit to keep working on this for months. This difficulty is exacerbated by the fact that you also want to be 4-5x more cost-effective, i.e. paying people 4-5x less. (This can be achieved in cheaper countries, but then you're trading off other things like talent density.)

  3. I think this job requires a mix of skills from different fields that you can't quickly pick up. Having personally tried to explain AI concepts to many freelancers with 10-20+ years of video editing experience, and conversely tried to explain short-form video editing to people with AI experience, I can guarantee that for similar projects to succeed you'd need people with a mix of both skills, which in my experience is quite rare, or would take at least ~1 month of mentoring (but then you'd also need to find the mentor, which brings you back to square one).

re "maximum earning potential": I think you're right to challenge the distinction between for-profit ML engineering salaries and non-profit video work, and I'll update the proposal accordingly. The actual datapoints I should have given are that: 1) this is already the rate people have been happy to compensate me when working with non-profit safety orgs to do video work. 2) when I do contract for for-profits I do actually tend to ask for more, both for video work and ML engineering work (at least ~30% more).

re "recoup": I was giving the documentary example to argue that I am indeed motivated by AI Safety. I might ask for retroactive funding later one, but this particular grant proposal is not trying to "recoup" anything.


Michaël Rubens Trazzi

15 days ago

@Austin Thanks for the kind words. The comments here have also been helpful for me to clarify how I'm thinking about things, overall red-teaming the proposal.

re short clips of existing content shaping the narrative: I do think that if you wanted to shape the narrative in a profound way, producing original content would be necessary to convey exactly the message you want to share (something no other creator is doing).

I do however think that there is some value in amplifying content that is already posted that is currently neglected, and that to be able to do it well requires specialized skills (as I've argued in more depth in my answer to Marcus).

To give an intuition pump: I see this work as amplifying the impact of people who are already doing original work, but have not spent enough time (because they're time-constrained, not that interested in it, or don't have the skills) looking at all the possible claims / packagings that could be extracted from their work.

So basically you're taking something raw (an interview) and trying to extract the important message that would resonate most with a particular audience (say, younger generations on TikTok) by packaging it correctly for a given algorithm. If done correctly, I think this multiplier effect could be quite impactful and also deserves funding.


Marcus Abramovitch

14 days ago

@Austin, I also like that he's being responsive, and I agree that many different POVs can be reasonable. I did not mean for my comments to tell people that they shouldn't donate/give money here; I'm mainly adding my take. My only point is that money is limited and has opportunity costs, and I was mainly explaining why I chose not to donate.

We can discuss elsewhere on appropriate EA salaries.

@Michael sure, I agree I would need to pay for this warm-up period and for them to scale up. I disagree, though, that this is highly skilled work that couldn't be picked up; I think your average EA who is interested in the topic could do this.
I didn't mean my comments to say "you should return this money". Lots of grants/spending in EA ecosystems I consider wasteful, ineffective, etc. And again, apologies for singling you out over a gripe I have with EA funding.

donated $1,000

Drew Spartz

10 days ago

@MarcusAbramovitch After studying the base rates on content creation a lot, I would argue that it IS highly skilled. It normally takes years to get any sort of meaningful traction on Youtube because it is extremely competitive. I think Michael's traction there would put him in the top ~2% of channels.

I've talked to maybe 100 EAs interested/working in content creation/media, and Michael is easily top 5 in terms of knowledge and how good his takes are. So I would be more confident that he's much more well positioned to tackle this than the average EA. Content is also very power law distributed, where a video that is 10% better than the competition might not receive 10% more views, but 10 times as many views.

Wouldn't be surprised if this project is getting 10m views per month within a few months.

donated $200

Nathan Metzger

20 days ago

The generality of this approach is a positive, since public awareness of AI risk itself is likely a prerequisite of good AI policy, which is likely a prerequisite of safe AI development.

donated $8,000

Neel Nanda

22 days ago

Seems like an interesting project, and impressive reach. What kinds of messages/calls to action do you hope to broadcast?

Also, presumably there's a typo above and you mean $10K for 5 weeks, not 10?

donated $800

Jesse Richardson

21 days ago

Seconded -- I am interested in this project but want to hear more about what outcomes you hope to achieve from an expanded audience.

donated $200

Nathan Metzger

21 days ago

I agree. Awareness is good in general, but some of the most watched clips don't really touch on AI Safety, and none of them have calls to action. ("Learn More Here," "Share This," "Call your representatives," etc.)


Michaël Rubens Trazzi

21 days ago

@NeelNanda Yes that was a typo, fixed it!

Regarding messages and outcomes (cc @NeelNanda, @Jesse-Richardson and @Haiku), see my strategy below, which includes a diagram summarizing the approach (also included in the main proposal):

  1. Messages: my goal is to promote content that is fully or partly about AI Safety:

    1. Fully AI safety content: Tristan Harris (176k views) on Anthropic's blackmail results summarizes recent AI safety research in a way that is accessible to most people. Daniel Kokotajlo (55k views) on fast takeoff scenarios introduces the concept of automated AI R&D and related AI governance issues. These show that AI safety content can get high reach if the delivery or editing is good enough.

    2. Partly / indirectly AI safety content: Ilya Sutskever (156k views) on AI doing all human jobs, the need for honest superintelligence, and AI being the biggest issue of our time. Sam Altman (400k views) on sycophancy. These build the general AI awareness that makes viewers receptive to safety messages going forward.

    3. "AI is a big deal" content: Sam Altman (600k views) talking about ChatGPT logs not being private in the case of a lawsuit. These videos aren't directly about safety but establish that AI is becoming a major societal issue.

The overall strategy is to prioritize posting fully-safety content with the potential for high reach, then the partly / indirectly safety content that walks people through why AI could be a risk, and occasionally content that is more generally about AI being a big deal, bringing even more people in.

  2. Outcomes: Rather than adding calls to action at the end of videos, which unfortunately makes them much less likely to reach a lot of people on TikTok (mostly because viewers exit instead of re-watching) and is quite uncommon there compared to YouTube, especially for clips, I'm expecting the outcomes to be:

    1. Engagement / following: About 50k people (3-4%) engaged with the content (shares, likes, comments, follows). I expect the people who engaged will keep seeing my content in the future (because TikTok will push it). In some cases they will engage more and more with the content that is directly about safety, and eventually integrate into the broader AI safety ecosystem (to a certain degree). (The rough funnel arithmetic is sketched after this list.)

    2. Profile clicks: About 0.5% of viewers click through to the channel's profile (I've received 5k+ profile views). The two outcomes from that are:

      1. Watching the pinned videos: 4k views on the 3 pinned videos came from these 5k profile clicks, meaning a large fraction of those who click on the profile watch a pinned video. In the future, one of these pinned videos could carry a strong CTA that leads directly to outcomes we care about around informing the public / representatives about AI safety, similar to this one, which had a very high conversion rate in getting viewers to take action.

      2. Clicking the link in bio: So far I don't have a clickable link, but I plan to link to e.g. aisafety.com to redirect people to resources for learning more about AI safety.

    3. Progressive exposure: Most people who eventually work on AI safety needed multiple exposures from different sources before taking action. Even viewers who don't click anywhere are getting those crucial early exposures that add up over time.
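
To make the funnel concrete, here is a minimal sketch of the conversion numbers quoted above; all figures are approximate, and the view total is assumed to be the ~1.3M from the past month:

```python
# Rough engagement funnel implied by the numbers above (all approximate).
views = 1_300_000        # assumed total views over the past month
engaged = 50_000         # shares, likes, comments, follows
profile_clicks = 5_000   # viewers who clicked through to the profile
pinned_views = 4_000     # pinned-video views coming from those clicks

print(f"Engaged:        {engaged / views:.1%} of views")         # ~3.8% (stated: 3-4%)
print(f"Profile clicks: {profile_clicks / views:.2%} of views")  # ~0.38% (stated: ~0.5%)
print(f"Pinned watch:   {pinned_views / profile_clicks:.0%} of profile clicks")  # 80%
```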

donated $8,000

Neel Nanda

21 days ago

Gotcha, thanks! @michaeltrazzi

That seems a pretty reasonable plan and you've gotten good reach. I'm not confident this is a good idea, but I think that's plausible and more value of information here would be good, so I've donated another month's worth. Good luck!


Michaël Rubens Trazzi

21 days ago

Thanks @NeelNanda !

donated $800

Jesse Richardson

20 days ago

Thanks for sharing! My other question: how much time are you spending on this each week? Is the TikTok + YouTube work roughly a full-time job at the moment?


Michaël Rubens Trazzi

20 days ago

@Jesse-Richardson Yes it's full-time.

I wrote down more details in my answer to Marcus here.

donated $200

Andrew G

22 days ago

Seems like a very promising approach!

donated $2,000

Brenton Milne

22 days ago

Great plan. Donated!