@michaeltrazzi
Michaël Rubens Trazzi
3 days ago
Thanks to everyone who has donated so far!
Quick update on the first two weeks of this project (Aug 10-Aug 23):
- We've reached 2.6M views: on track for the "best case" scenario outlined above of 500k views / week
- We went from 2k to 14.4k followers: getting really close to the 15k follower target
Highlighted videos:
- Tristan Harris on a country of geniuses in a datacenter, AIs lying and scheming, and the process of building increasingly powerful systems being "insane" (109k views)
- Eric Schmidt talking about what's going to happen in AI in 1-2 years, including AI automation, AI agents and recursive self-improvement (1.2M views)
- Eliezer Yudkowsky saying "If anyone builds it everyone dies, you're not going to solve the alignment problem in the next couple years" (68k views)
Michaël Rubens Trazzi
10 days ago
@Austin Thanks for the kind words. The comments here have also been helpful for me to clarify how I'm thinking about things, overall red-teaming the proposal.
re short clips of existing content shaping the narrative: I do think that if you actually wanted to shape the narrative in a profound way, producing original content would be a necessary condition, since it's the only way to convey exactly the message you'd want to share, something no other creator is doing.
I do however think that there is some value in amplifying content that is already posted that is currently neglected, and that to be able to do it well requires specialized skills (as I've argued in more depth in my answer to Marcus).
To give an intuition pump: I see this work as amplifying the impact of people who are already doing original work, but have not spent enough time (because they're time-constrained, not that interested in it, or don't have the skills) looking at all the possible claims / packagings that could be extracted from their work.
So basically you're taking something raw (an interview) and trying to extract the important message that would resonate the most with a particular audience (say, younger generations on TikTok) by packaging it correctly for a given algorithm. And if done correctly, I think this multiplier effect could be quite impactful and also deserves funding.
Michaël Rubens Trazzi
10 days ago
@MarcusAbramovitch I get that you want to achieve the highest impact per dollar and you'd want to find the most cost-effective option. However, I’d just like to offer some nuances / corrections to the points you’re making.
re 4-5x more impact per $:
You're not only paying for the time spent working on a project, but also for the initial traction the project has, which might take a couple of months to reproduce, or fail entirely. In other words, you'll need to pay for the initial "warming up" phase, which is uncertain and won't have impact right away. And I believe the failure rate for getting similar results in 1-2 months will be high.
Relatedly, you'll need to find someone motivated enough to work with a low number of views at the start, and who would commit to keep working on this for months. This difficulty is exacerbated by the fact that you also want to be 4-5x more cost-effective, i.e. pay people 4-5x less. (This can be achieved in cheaper countries, but you're then trading off other things like talent density.)
I think this job requires a mix of skills from different fields that you can't quickly pick up. Having personally tried to explain AI concepts to many freelancers with 10-20+ years of video editing experience, and conversely tried to explain short-form video editing to people with AI experience, I can guarantee that for similar projects to be successful you'd actually need people with a mix of both skills, which in my experience is quite rare, or would take at least ~1 month of mentoring (but then you'd also need to find the mentor, which would bring you back to square one).
re "maximum earning potential": I think you're right to challenge the distinction between for-profit ML engineering salaries and non-profit video work, and I'll update the proposal accordingly. The actual datapoints I should have given are that: 1) this is already the rate people have been happy to compensate me when working with non-profit safety orgs to do video work. 2) when I do contract for for-profits I do actually tend to ask for more, both for video work and ML engineering work (at least ~30% more).
re "recoup": I was giving the documentary example to argue that I am indeed motivated by AI Safety. I might ask for retroactive funding later one, but this particular grant proposal is not trying to "recoup" anything.
Michaël Rubens Trazzi
14 days ago
@cian You're right that the higher end of your range sounds high framed this way. The high-reach clips (>100k views) are great ROI, but if you don't average things out and account for the heavy tail (most of the value comes from the small number of clips that take off, and these keep paying dividends as you post more of them), the ones that don't take off feel pricey. If that helps: TikTok's algorithm favors posting 1-3x daily max, so I actually cut 3-5 clips and only post the best ones, so you're not just paying for what gets posted.
@Jesse-Richardson Thanks for engaging while still being on the fence. Curious what your main concerns are and whether there are any more questions I can answer. Is it the compensation level, the time commitment, or something else? Also, happy to explore different structures if that helps, though I'm not sure I can accommodate people individually directly on Manifund, since I'd still need to be fair to people who donated here.
Michaël Rubens Trazzi
15 days ago
Update Aug 13:
- Corrected my projections to be more accurate and conservative: now targeting 6.5M-10M views by year end (I had initially made a math error where things were off by a factor of two).
- I've posted some more thoughts on LW / EAF regarding where I'm expecting most of the impact to be, expanding on what I call "progressive exposure".
Michaël Rubens Trazzi
15 days ago
@Jesse-Richardson Yes it's full-time.
I wrote down more details in my answer to Marcus here.
Michaël Rubens Trazzi
15 days ago
@MarcusAbramovitch Thanks for the questions. Let me address both points:
On the work involved: I spend 5-6 hours a day going through multiple podcasts to find the very best clips, most of which don't end up being posted. There's also editing / uploading work (2-3 hours) on top of that which is hard to see (adapting clips between horizontal and vertical formats by scaling / repositioning / changing backgrounds, iterating on different captions, fixing the audio, fixing subtitles, upscaling, uploading to different platforms, checking on different devices). It definitely adds up to a full day of work, especially as I do more clips.
6 figure (annualized) salary: I've spent some time thinking about how much money I would ask for to compensate my time on this grant. One important phrase above is "work full-time on this project productively". I did consider other amounts that would mean basically only paying bills and nothing else, but I don't think that would have been sustainable or helpful in making this project go well.
To give more context, last year I made the mistake of under-budgeting on my salary for the SB-1047 documentary (see post-mortem here), which meant that I basically paid myself for only 2 of the ~8-9 months I spent on this. One lesson I learned from this is that compensating yourself for your time is not just a cherry on top after you have everything else figured out, but something necessary to work on something productively for extended periods of time.
> And assuming you believe in AI safety sufficiently, if you only get $12.4k (current amount as of this comment) in funding, are you going to just quit posting in 6 weeks and a day
After 6 weeks of full-time work, I'd evaluate options to maximize the project's continued impact: transitioning to part-time while fundraising, mentoring someone to continue, or documenting my process for others to pick it up.
I appreciate the clarifying questions, let me know if you need anything else.
Michaël Rubens Trazzi
16 days ago
@NeelNanda Yes that was a typo, fixed it!
Regarding messages and outcomes (cc @NeelNanda, @Jesse-Richardson and @Haiku), see below my strategy which includes a diagram summarizing the approach (also included in the main proposal):
Messages: my goal is to promote content that is fully or partly about AI Safety:
Fully AI safety content: Tristan Harris (176k views) on Anthropic's blackmail results, which summarizes recent AI safety research in a way that is accessible to most people; Daniel Kokotajlo (55k views) on fast takeoff scenarios, which introduces the concept of automated AI R&D and related AI governance issues. These show that AI Safety content can get high reach if the delivery or editing is good enough.
Partly / Indirectly AI safety content: Ilya Sutskever (156k views) on AI doing all human jobs, the need for honest superintelligence and AI being the biggest issue of our time. Sam Altman (400k views) on sycophancy. These help with general AI awareness that makes viewers receptive to safety messages moving forward.
"AI is a big deal" content: Sam Altman (600k views) talking about ChatGPT logs not being private in the case of a lawsuit. These videos aren't directly about safety but establish that AI is becoming a major societal issue.
The overall strategy here is to prioritize posting fully-safety content that has the potential to have high reach, then go for the partly / indirectly safety content that walks people through why AI could be a risk, and sometimes post some content that is more generally about AI being a big deal, bringing even more people in.
Outcomes: I'm not adding calls to action at the end of videos, since that unfortunately makes videos much less likely to reach a lot of people on TikTok (mostly because people exit instead of re-watching the video) and is quite uncommon on TikTok compared to YouTube, especially for clips. Instead, I'm expecting the outcomes to be:
Engagement / Following: about 50k people (3-4%) engaged with the content (shares, likes, comments, follows). I expect that the people who engaged will continue seeing my content in the future (because TikTok will push it). In some cases, they will end up engaging more and more with the content that is directly about safety, and eventually join the broader AI Safety ecosystem (to some degree).
Profile clicks: About 0.5% of viewers click on the channel's profile (I've received 5k+ profile views). The two outcomes from that are:
Watching the pinned videos: 4k views on the 3 pinned videos came from these 5k profile clicks, meaning a large fraction of people who click on the profile go on to watch a pinned video. I think in the future one of these pinned videos could be a video with a strong CTA that directly leads to outcomes we care about around informing the public / representatives about AI Safety, similar to this one, which had a very high conversion rate in getting viewers to take action.
Clicking on the link in bio: so far I don't have a clickable link, but plan to link to eg. aisafety.com to redirect to resources to learn more about AI Safety.
Progressive exposure: Most people who eventually work on AI safety needed multiple exposures from different sources before taking action. Even viewers who don't click anywhere are getting those crucial early exposures that add up over time.
Michaël Rubens Trazzi
27 days ago
@Austin Thanks!
I should clarify: I don't think "presenting at The Curve" slowed things down per se.
I think in a lot of ways it was positive (eg. in having a clear intermediary deadline, and I also met people there who introduced me to people who ended up donating a significant amount).
What slowed things down a little bit was 1) not starting to work from the beginning with the people I would want to work with on the final product, and 2) "software decisions": we worked in not-very-modular ways before the conference, which meant we had to redo a lot of the work we did there.
I think there would have been ways to work in a more modular / long-term way from the start, and I take responsibility there.
Also, having a "MVP" (or "rough cut" in the movie world) was also instrumental in contracting more experienced folks, so having that "conference cut" from the Curve was maybe a necessary step in hiring more senior people after all.
Michaël Rubens Trazzi
27 days ago
SB-1047 Documentary Post-mortem
This documentary took 27 weeks and $157k instead of my planned 6 weeks and $55k. Here's what I learned about documentary production.
Total funding received: ~$143k ($119k from this grant, $4k from Ryan Kidd's regrant on another project, and $20k from the Future of Life Institute).
Total money spent: $157k
In terms of timeline, here is the rough breakdown month-per-month:
- Sep / October (production): Filming of the Documentary. Manifund project is created.
- November (rough cut): I work with one editor to go through our entire footage and get a first rough cut of the documentary that was presented at The Curve.
- December-January (final cut - one editor): I interview multiple potential editors who would work on the final cut, and decide on one candidate who would do most of the editing (from December to February).
- February-March (final cut - 7 Full-Time Equivalents): I work with a total of 7 seasoned professionals (working full-time) to have a finished documentary by the end of March. This is the most capital intensive period of the post-production phase.
- April: We wait to hear back from multiple distributors about whether they would be interested in publishing the documentary on their platform. Multiple outlets show strong interest (New York Times Op-Docs, Wired) but the content of the documentary doesn't fit their publication policies.
- May: The documentary is published on May 5th.
Breaking down how the money was spent:
- Editing was the largest part of the expenses, since I ended up paying a total of 4 different editors, who worked from November through March. From February to March I had multiple editors working on the documentary in parallel.
- Motion graphics was the second largest item, with two people working on motion graphics in February and March.
- In terms of music & sound, the documentary used custom music made by a composer, with some of the songs played by real instruments, but also required the work of a seasoned sound mixer, which is why this is the third most expensive item.
- The director's salary ended up representing only ~9% of the total expenses, since I had originally planned to pay myself $15k for 10 weeks, but the project ended up taking ~27 weeks instead.
But why did the project end up taking 27 weeks instead of 6 weeks?
- Short answer: I ended up getting more funding than I originally asked for on Manifund, and had to hire many different professionals with that funding. Having to present something intermediary at the conference "The Curve" potentially slowed us down. And a lot of the steps had to happen one after the other, including all of the fundraising, hiring, and multiple stages of post-production, on top of the distribution phase where we had to wait to hear back from potential distributors. All of this meant the movie took ~5 months to be ready (and 6 months to be out) instead of 6 weeks.
- Long answer (breaking out month by month):
--> In November I was invited to do a first screening of the documentary at the AI conference The Curve, which I imagined would be a great way to present a first draft of the movie as we were working on it. However, in order to get this first draft done in time for the conference, I had to hire an editor I did not end up working with throughout the project, and most of the work was rushed using software we did not use afterwards, meaning a lot of the work we did in November was not directly re-usable in later months.
--> In December a lot of time was spent trying to find a film editor who would be willing to work full-time with us on the project. In the end I was really satisfied with the editor we ended up with, but then came the end-of-year holidays, so not a lot of editing happened then. On top of that, our funding was constrained enough that I could not hire more people to work on this, which made my only editor much less productive than in environments where he would have had an assistant editor, an archival producer, or a motion graphics person to help him.
--> In January, an important bottleneck was still not being able to hire more people because of funding constraints, but also the wildfires in LA, which directly impacted our main editor. When funding was secured, a lot of time was spent finding the right people to join in February, including a composer, an archival producer, two motion graphic designers, and two editors. Another thing that slowed us down was that the software and organisation decisions we had taken in earlier months were starting to catch up with us, so we had to do some large refactoring and transition software.
--> In February most of the editing for the documentary happened, with 7 people working full-time on it. By the end of the month, we had something that was almost done in terms of story and beats.
--> In March most of the music, motion graphics and sound happened, because all of this work required the edit to be locked-in (also called "picture lock").
--> In April the movie was ready to be published, but we spent most of the month waiting for the potential distributors (NYT, Wired, LA Times, etc.) to get back to us. Other work involved upscaling and marketing.
--> In May the documentary was published on Youtube at the beginning of the month.
Impact:
- The documentary achieved 2,500 hours of watch time (25% retention rate) across 20k YouTube views, plus 100k views on X (see the quick back-of-envelope check after this list).
- The documentary was presented at the AI conference "The Curve", and I'm in talks to present it at the UK Parliament (depending on funding).
- I've heard from multiple news outlets (including Wired and the NYT) and filmmakers that the film was very well edited and deserved to be on streaming platforms.
- As a consequence of that, I've recently started submitting the documentary to movie festivals.
- There is still potential work I want to be doing involving directly sending / showing this documentary to lawmakers in the UK / US (depending on funding).
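As a quick back-of-envelope check of the watch-time figure above, here is a minimal sketch in Python using only the numbers stated in this post (2,500 hours across 20k YouTube views, 25% retention). The "implied runtime" line rests on the assumption that "retention rate" means the average share of the video watched; that interpretation is my assumption, not something stated above.

```python
# Back-of-envelope check of the YouTube numbers above (a sketch, not an official figure).
watch_time_hours = 2_500   # total watch time stated above
views = 20_000             # YouTube views stated above
retention = 0.25           # stated "retention rate"; assumed here to mean average share of the video watched

avg_minutes_per_view = watch_time_hours * 60 / views        # ~7.5 minutes per view
implied_runtime_min = avg_minutes_per_view / retention      # ~30 minutes, only valid under the assumption above

print(f"Average watch time per view: {avg_minutes_per_view:.1f} min")
print(f"Implied runtime (if retention = avg % watched): {implied_runtime_min:.0f} min")
```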
What I would do differently next time:
- I would work with a fixed timeline and a realistic corresponding budget. A lot of the problems I had with this project came from wanting to finish a documentary quickly with not enough staff, and having to wait to fundraise more before hiring more people. Once I had enough staff, getting people to work quickly and finish the project was much easier.
- I would dedicate more funding to marketing / distribution, and go through the film festival route first. A lot of the issues I had with distribution towards the end came from the fact that getting potential distributors to watch your documentary takes time, and a lot more time needs to be allocated to sending your movie through the film festival pipeline to get increased coverage down the line. I think one of the reasons the movie got fewer views on YouTube than expected is that people do not expect to watch a movie on YouTube; the content would have performed better on a streaming platform where they'd expect a documentary.
- In terms of the movie itself, I would spend more time at the start discussing exactly what AI Safety is, why AI is a big deal, and what AI regulation is, to address a wider audience. Given that the documentary was published on YouTube, it is necessary to explain those terms in a much more pedagogical way, so that most people who had not heard of SB-1047 could understand what was happening and why they should care.
- I would focus on figuring out distribution first. I am already in touch with distributors who would be happy to work with me from the beginning of the next project, instead of my having to convince them later to distribute it. This guarantees funding and distribution from the start.
- I would start with all of my team already figured out. Now that I have already talked to >50 seasoned professionals and contracted 16 of them, I would be able to get started with professionals I trust from day one for my next project, which would have saved me about 2 months on this project.
- I would also work in person instead of remotely, which I think would have saved me one month.
Michaël Rubens Trazzi
4 months ago
@Connoraxiotes curious: how much did flying to NYC and having 9/10 people on set cost?
With that burn, how many more interviews can you shoot?
Michaël Rubens Trazzi
4 months ago
OK so this took way longer than expected to get out.
I'll post another update later detailing everything that happened since November but for now the full documentary is available to watch on X and Youtube.
Thanks again to everyone who donated! Your name should be in the credits according to your donation tier.
Michaël Rubens Trazzi
9 months ago
Nov 26th Update:
- After 4 weeks of editing, hiring multiple editors & other contractors (sound mixing, production coordinator, video labeling), we presented a 1h05m rough cut of the SB-1047 documentary at "The Curve" on Sat 23
- We observed an increase of 1.4 points of understanding after watching the documentary, which satisfies our goal of increasing the understanding of the different positions held on SB-1047 (we polled 21 participants before and after the screening)
- We observed a weak correlation between participants' initial position on the bill and how likely they are to recommend the documentary to a friend. The fact that the correlation is weak is positive, showing that the documentary was recommended (and not recommended) by people with different positions.
- I no longer believe that it is possible to finish this project in "6 weeks" as previously proposed for the funding level that we reached
- My current estimate for when the movie will be out is now at best late January 2025, if not early February
- I believe that many of the key players who would have benefited from seeing the documentary earlier rather than later have already benefited from watching the rough cut at The Curve, or will hear about it from there and can see it upon request or at targeted screenings.
- My new goal is now to have a final cut published before a new bill is announced or just after it is announced (estimated date for when the next bill is announced: February)
- My previous budget estimates were too optimistic and did not take into account the cost & time of motion graphics / animation, the cost of hiring a composer / music supervisor, and the cost of hiring an experienced video editor to go from a rough cut to a final cut over 2+ months (which seems to be the minimum I am currently being quoted).
- Therefore, we are still funding constrained, and I expect that any funding above $55k will be invested towards paying contractors for the post-production (say for animation, music and video editing), with video editing and animation being the two main costs, and music being slightly lower. (Note that video editing was to some extent included in the original manifund project, but animation was not included at all, so this is where I'd expect most extra marginal funding to go).
Michaël Rubens Trazzi
12 months ago
Offering a token of appreciation since I learned a lot reading 80k's blog, listening to Rob Wiblin, and doing a coaching call. Same as Neel, I'm donating a small amount as a token of appreciation, since this is already a large org.
Michaël Rubens Trazzi
12 months ago
Token of appreciation since I have personally benefited from Lightcone's work when visiting Berkeley.
Michaël Rubens Trazzi
12 months ago
A lot of people I interact with regularly have done the MATS program and received a lot of value from it.
Similar to Neel below, I want to give a token of appreciation and vote in the quadratic funding, given that MATS is a large funding opportunity.
Michaël Rubens Trazzi
12 months ago
update: I've now thought more about it and watched more videos; the videos seem to be getting good engagement, and Liron has been posting them quite regularly, demonstrating his willingness to grow.
I think the "reacting to video" aspect is a niche that is not currently filled and is worth doing.
I don't especially like focusing too much on basic arguments regarding "optimizers" or "paperclips" (mentioned in the Shapiro and Bret Weinstein videos I watched) and would prefer more recent terminology (say discussing things specific to language models / SoTA and how the current paradigm could become risky), but I want to donate some amount now to show some support, possibly adding more later as I get more evidence.
Michaël Rubens Trazzi
12 months ago
@GauravYadav This looks much better, thanks.
I've now watched the two videos you linked in full and gave feedback through the appropriate channels.
Offering a small amount for now to signal support. Could imagine doing a bigger donation, especially if more donations happen (to bring the project closer to minimum funding), or if there's new evidence that comes up (say you publish a new video before the "24 days left to contribute" that I think is especially promising).
Michaël Rubens Trazzi
12 months ago
Note: donating a small amount now to signal support, might add more later when I have more clarity on how I want to spread my donations.
I think a world in which there was no advocacy for an AI Pause would be worse, so I am glad that they exist and organize protests.
Main reasons I'm excited
- I think $2.1k / volunteer is quite cheap for the amount of work a volunteer can do in a year
- I've interviewed Holly Elmore from PauseAI US who I overall trust with the project
- I think their discord is already quite active, and the 2k "members" (discord members?) and 100 registered volunteers show some decent engagement
Some reservations
- When I look at the protest list (https://pauseai.info/protests), it seems that there haven't been that many protests in 2024 organized by PauseAI.
- I don't think any PauseAI protest has reached enough participation to have say 100+ people showing up at one location, which I think would be more impressive than what I've currently seen
Overall, I'm glad that PauseAI exists and appreciate all of the efforts that are being done in that space.
Michaël Rubens Trazzi
about 1 year ago
@liron Thanks for the quick & detailed answers.
Since I wrote that comment I've watched the beginning of most of your current videos, and I'd like to add a few updates on my thinking:
I did look at the engagement (comments & likes) on current videos and it does seem like a positive signal of a strong core audience. I guess to get the full picture it would be good to have the number of dislikes, to see whether it's just that the video topics are controversial, but from the comments people do seem to value your work, which is again a good sign.
it seems that most of the content is "Liron react" rather than "Doom debate", meaning you're reacting to what someone said instead of debating. I think debates are harder to do since you'd need to get people to agree to debate you and schedule it, and they would also be more interesting to watch. But reacts don't otherwise exist in the space, so they fill the "commentary" niche and are possibly a way to get a debate afterwards.
the production quality is quite good on your side, though, say, when you're reacting to David Shapiro he is only shown inside a circle (I guess for copyright reasons?), which I think could be slightly improved
Right now, given your channel is quite new and the engagement so far, I think I've updated towards the potential growth you've described being possible.
I guess my last conflicted thoughts are on the theory of change. If, say, more David Shapiro folks watch your reacts / debates, or more generally people come to understand the basic arguments for why "doom" is likely, what do you expect them to do as a result?
My understanding is that your audience would most likely not be that technical, so the goal is potentially to get people more concerned about AI risk to, say, shift public opinion on AI legislation, or respond to calls to action from the PauseAI movement? If so, it would be good to have data on that (e.g. a form on how much their perspectives have shifted, or how many PauseAI Discord signups come from your channel).
Anyway, thanks again for the thorough answer. I'll try watching a full Liron react and potentially see your final reply to this before coming up with a donation decision.
Michaël Rubens Trazzi
about 1 year ago
I have watched parts of the two initial videos and I think this is really promising.
Responsible Scaling Policies and SB-1047 are two things that I think would be great to have in an easy-to-digest video format, and I am glad Gaurav has already done some of the work with his two initial videos.
Some reservations I have:
- 1. I think at this stage your setup looks fine and I'd be more interested in seeing what kind of results you get without a fancy setup than say spending $ on expensive audio / camera equipment.
- 2. More generally, if I donate say $50, I'd prefer this $50 to go directly to one hour of your time rather than contribute towards a $1k camera or similar. Today our phones are generally as good as or better than $1k cameras.
- 3. However, I could see a version of this where you could break down exactly what kind of cheap green screen + light + software you'd want to use, and say like "I really think that I'd need $300 to buy this $200 light, $40 green screen, and subscribe to Adobe Premiere Pro for 3 months ($20 * 3) which would make my life really easier as a friend of mine could teach me Premiere Pro."
- 4. I think the goals in "What are this project's goals? How will you achieve them?" are not quantifiable as is.
Example of quantifiable goals:
"""
My goal is to try to see if I could produce valuable explainers on AI Governance. I plan to make one video on compute governance, one video on the EU AI Act, and one final video on another legislative update as it occurs; if nothing important happens I'd by default talk about [Insert video idea you want to do anyway]. If by [date] I don't get [X people telling me they found it more than 8/10 valuable] or [Y views], then I'd consider that I am not a good fit.
"""
I should say that the two videos really were quite engaging, and I ended up watching a good chunk (a few minutes); overall this looks quite promising. I'm simply looking for more clarity on the precise needs & concrete plan.
Michaël Rubens Trazzi
about 1 year ago
I think communicating AI risk through debate with a podcast & youtube channel is valuable. I have seen some episodes of Liron debating with people I'd qualify as "tech optimists" (George Hotz, Theo Jaffe, Guillaume Verdon) and I can confirm that he has the "stamina" and patience to go through these debates (as he mentions here).
My current main reservations are:
- framing it as a "doom debate" or a confrontaional "doomers vs non doomers" which could potentially cause harm (making the movement be seen more like an advesarial apocalyptic cult rather than a technical field)
- one of his debates with Guillaume Verdon (around his startup) might have been counterproductive (he repeated the same question about "inputs / outputs", which I think was a fair question, but this didn't signal to the audience that he was engaging with Guillaume's actual plan, and he was criticized on Twitter for that).
A few clarifying questions on the actual text:
> Just two months into the project, we're already getting 1k+ views and listens per episode.
How many listens are included in "views and listens"? Is it 1k+ views + listens on average? What is, say, the average watch time per episode? What is the average view duration (in %)?
> We're now at the point where we can convert money into faster audience growth using obvious low-hanging-fruit growth tactics.
What are these "obvious" low-hanging fruit growth tactics?
> Plus, the topic of AI risk is growing increasingly mainstream. So I bet the potential audience who wants to consume my content about AI x-risk really is in the millions.
How did you end up with "in the millions" here?
> The viewers-per-episode metric will soon hit tens of thousands
What data supports that? What do you mean by soon?
> The most likely failure mode is if my content doesn't resonate with a large audience, making it hard to get beyond 25k+ views and listens per episode, such that I give up on the 250k+ views per episode which I currently think is a realistic long-term goal.
When will you decide whether you're in that failure mode or not? Why did you decide on 250k+ views per episode as a long-term goal? What makes you think it is a "realistic long-term goal", and what do you mean by "long-term" here?
===
Final note: I think Liron is doing hard work in the space and I think this has the potential to be turned into something impactful. The previous questions were simply to get more clarity on some of the numbers that were mentioned before, and the potential pitfalls I've mentioned in my reservations. I think Liron has proven to have the consistency and patience to hold these debates regularly, and I overall encourage the project, assuming some of the reservations I have are addressed in the future.
Michaël Rubens Trazzi
about 1 year ago
Main updates since I started the project
- March: published popular video: "2024: the age of AGI"
- April: edited an Ethan Perez interview
- April-May: I record & edit my AI therapist series
- May-June (daily uploads): I published 23 videos, including 8 paper walkthroughs, 4 animated explainers and one collab (highlight). More below.
- August: I record & edit an interview with Owain Evans (will be published in the next few days).
Detailed breakdown of the 23 videos from my daily uploads:
Paper walkthroughs:
- The Full Takeoff Model series (3 episodes)
- Anthropic's scaling monosemanticity
- Safety Cases: How to Justify the Safety of Advanced AI Systems
Sleeper agent series:
- walkthrough of the sleeper agent paper
- walkthrough of the sleeper agent update
- four-part series on sleeper agents
Nathan Labenz collab:
- one interview with Nathan Labenz from Cognitive Revolution (highlight)
- the full video includes a crosspost of Nathan Labenz's episode with Adam Gleave
Other alignment videos:
- discussion of previous coding projects I've made related to alignment
- mental health advice when working on AI risk
- take on Leopold's Situational Awareness
- the AI therapist series (about AI's impact)
August: In the next week or so I will be publishing an interview with Owain Evans.
September-?: I'm flying to San Francisco to meet people working full-time on AI Safety, so I can have a better sense of what research to cover next with more paper walkthroughs, and to schedule some interviews there in person.
After that, my main focus will be editing the recorded in-person interviews, recording paper walkthroughs, video explainers, and remote interviews, and possibly some more video adaptations of fiction (like this one).
Any additional funding that goes into this grant will directly go into reimbursing my travel expenses in the Bay, and allow me to stay longer and record more in-person interviews.
I am happy to get connected with people to talk to (for an interview, or because they wrote an important paper I should cover).
For | Date | Type | Amount |
---|---|---|---|
Grow An AI Safety Tiktok Channel To Reach Ten Million People | 6 days ago | project donation | +1000 |
Grow An AI Safety Tiktok Channel To Reach Ten Million People | 10 days ago | project donation | +800 |
Grow An AI Safety Tiktok Channel To Reach Ten Million People | 14 days ago | project donation | +5000 |
Grow An AI Safety Tiktok Channel To Reach Ten Million People | 16 days ago | project donation | +2000 |
Grow An AI Safety Tiktok Channel To Reach Ten Million People | 16 days ago | project donation | +200 |
Grow An AI Safety Tiktok Channel To Reach Ten Million People | 16 days ago | project donation | +2000 |
Grow An AI Safety Tiktok Channel To Reach Ten Million People | 16 days ago | project donation | +8000 |
Grow An AI Safety Tiktok Channel To Reach Ten Million People | 16 days ago | project donation | +200 |
Finishing The SB-1047 Documentary | 6 months ago | project donation | +25 |
Finishing The SB-1047 Documentary | 6 months ago | project donation | +200 |
Manifund Bank | 7 months ago | withdraw | 51262 |
Finishing The SB-1047 Documentary | 7 months ago | project donation | +41162 |
Finishing The SB-1047 Documentary | 8 months ago | project donation | +10000 |
Finishing The SB-1047 Documentary | 8 months ago | project donation | +100 |
Manifund Bank | 9 months ago | withdraw | 53510 |
Finishing The SB-1047 Documentary | 9 months ago | project donation | +35 |
Finishing The SB-1047 Documentary | 9 months ago | project donation | +4000 |
Finishing The SB-1047 Documentary | 9 months ago | project donation | +3200 |
Finishing The SB-1047 Documentary | 9 months ago | project donation | +5000 |
<06e6a51a-b287-41af-92c8-73672aceed02> | 9 months ago | tip | +1 |
Making 52 AI Alignment Video Explainers and Podcasts | 9 months ago | project donation | +10 |
Finishing The SB-1047 Documentary | 9 months ago | project donation | +5000 |
Finishing The SB-1047 Documentary | 9 months ago | project donation | +10000 |
Manifund Bank | 10 months ago | withdraw | 15000 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +5000 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +20 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +20 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +100 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +100 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +4140 |
Manifund Bank | 10 months ago | withdraw | 6087 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +10000 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +6000 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +50 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +50 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +300 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +130 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +80 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +10 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +3000 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +5000 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +50 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +100 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +100 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +15 |
Finishing The SB-1047 Documentary | 10 months ago | project donation | +6000 |
Making 52 AI Alignment Video Explainers and Podcasts | 11 months ago | project donation | +4000 |
80,000 Hours | 11 months ago | project donation | 10 |
Doom Debates - Podcast & debate show to help AI x-risk discourse go mainstream | 12 months ago | project donation | 20 |
AI Governance YouTube Channel | 12 months ago | project donation | 40 |
PauseAI local communities - volunteer stipends | 12 months ago | project donation | 10 |
Making 52 AI Alignment Video Explainers and Podcasts | 12 months ago | project donation | +331 |
Making 52 AI Alignment Video Explainers and Podcasts | 12 months ago | project donation | 500 |
Making 52 AI Alignment Video Explainers and Podcasts | 12 months ago | project donation | 500 |
Making 52 AI Alignment Video Explainers and Podcasts | 12 months ago | project donation | 500 |
Making 52 AI Alignment Video Explainers and Podcasts | 12 months ago | project donation | +50 |
Making 52 AI Alignment Video Explainers and Podcasts | 12 months ago | project donation | +11 |
Lightcone Infrastructure | 12 months ago | project donation | 10 |
MATS Program | 12 months ago | project donation | 10 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 year ago | project donation | +50 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 year ago | project donation | +50 |
Manifund Bank | about 1 year ago | deposit | +600 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 year ago | project donation | +50 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 year ago | project donation | +50 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 year ago | project donation | +385 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 year ago | project donation | +600 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 year ago | project donation | +10 |
Manifund Bank | over 1 year ago | withdraw | 150 |
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | +150 |
Manifund Bank | over 1 year ago | withdraw | 8022 |
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | +5000 |
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | +500 |
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | +10 |
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | +500 |
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | +1942 |
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | +20 |
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | +50 |