Manifund
Charbel-Raphael Segerie

@charbel-raphael

Executive Director @CeSIA

https://crsegerie.github.io/
$7 total balance
$7 charity balance
$0 cash balance

$0 in pending offers

Projects

Investigating constructability as a safer approach to machine-learning
AI Safety Textbook

Outgoing donations

Keep Apart Research Going: Global AI Safety Research & Talent Pipeline
$10
6 months ago

Comments

Grow An AI Safety Tiktok Channel To Reach Ten Million People

Charbel-Raphael Segerie

about 19 hours ago

Congrats on getting the grant, and on those numbers!

I wonder if it makes sense to translate those clips into a few languages automatically?

Tarbell Center for AI Journalism

Charbel-Raphael Segerie

about 19 hours ago

I personally think this project deserved more funding. Those numbers are quite impressive, and the Transformer newsletter is critical epistemic infrastructure for the community.

Runway till January: Amplify's funding ask to market EA & AI Safety 

Charbel-Raphael Segerie

about 20 hours ago

I think the numbers highlighted are pretty impressive (~$570 per GWWC pledge and ~$1700 per MARS fellow), and I think marketing is important and would benefit many organisations.

From what I see, GWWC estimates the average 10% pledger donates $100,000 over the course of their pledge, and they secured 2 pledges from the EA Netherlands campaign alone, so the campaign has already more than paid for itself. I would give them money.

Project B2-Boop: Gamifying Instrumental Convergence & Reward Hacking

Charbel-Raphael Segerie

about 21 hours ago

Hi, interesting proposal. I've downvoted because I think this awareness strategy is dominated by alternatives like producing short YouTube/TikTok videos explaining the same concepts, which could reach comparable audiences at a fraction of the cost. Here is an analysis of our YouTube strategy for example:  https://manifund.org/projects/scaling-ai-safety-awareness-via-content-creators

Formal Certification Technologies for AI Safety

Charbel-Raphael Segerie

about 21 hours ago

Congrats on getting the grant!

I'd be curious which papers or agenda you see as closest to your theoretical framework/broader theory of impact. The project summary mentions preliminary results at AAAI, NeurIPS, CAV—could you link to those?

Progress Studies YouTube Channel

Charbel-Raphael Segerie

about 21 hours ago

Congrats on getting the grant. Progress is cool. We need more progress.

I'm sure you've already heard this, but AI will be the source of most progress over the coming years, so I'm curious how you'll approach AI's role in progress, if at all.

Also, it might be interesting to test a strategy for 15 days that involves reaching out to and collaborating with big YouTubers to drive growth. This is what CeSIA did, and we got multiple videos with more than 1M views like this without even being YouTubers ourselves (we just acted as consultants). See a write-up of the strategy here: https://manifund.org/projects/scaling-ai-safety-awareness-via-content-creators

AI Security Startup Accelerator Batch #2

Charbel-Raphael Segerie

about 21 hours ago

Regarding "In our first SF batch, we worked with companies like Andon Labs, Lucid Computing, and Workshop Labs": what was the counterfactual impact?

Keep Apart Research Going: Global AI Safety Research & Talent Pipeline

Charbel-Raphael Segerie

6 months ago

Apart has been useful for me to quickly experiment with ideas and improve through fast iteration. I organised multiple hackathons before I knew of Apart, and their format is vastly more effective at converting talent per unit of effort. While I was head of EffiSciences' AI Safety Unit, this was one of my favourite event formats, and it is one of the formats I encourage ML4Good alumni to run. Empirically, each Apart hackathon I organised in Paris enabled the long-term careers of 0.6 people in AI safety (see the table): on average, 0.6 new full-time people started working on AI safety after each Apart hackathon event in Paris.

AI Safety Textbook

Charbel-Raphael Segerie

9 months ago

We posted an update here: https://manifund.org/projects/ai-safety-atlas

AI Safety Textbook

Charbel-Raphael Segerie

about 1 year ago

@Markov And the book has a much better name now: "The AI Safety Atlas"!

Transactions

For | Date | Type | Amount
Keep Apart Research Going: Global AI Safety Research & Talent Pipeline | 6 months ago | project donation | 10
<1a6263a6-e893-4717-a364-a021f6089e4d> | 6 months ago | tip | 1
<584b961e-134c-47aa-895f-350d8524f14c> | 6 months ago | tip | 1
<97dd5197-7c7a-4c7f-a533-b74f3f20f95a> | 6 months ago | tip | 1
Manifund Bank | 6 months ago | deposit | +20
Manifund Bank | over 1 year ago | withdraw | 41070
AI Safety Textbook | over 1 year ago | project donation | +60
AI Safety Textbook | over 1 year ago | project donation | +39000
AI Safety Textbook | over 1 year ago | project donation | +10
AI Safety Textbook | over 1 year ago | project donation | +2000