CeSIA

Technical AI safety · AI governance · Global catastrophic risks
Florent Berthet

Not funded · Grant
$0 raised

Project summary

A center for AI safety in Paris, established in May 2024 by EffiSciences, aiming to promote the responsible development of AI. The center will focus on:

  1. Advocacy: raise awareness about AI safety.

  2. R&D: conduct technical projects in partnership with organizations responsible for implementing the EU AI Act, as well as other key players within the ecosystem.

  3. Field-building: train researchers and engineers, and support policymakers.

What are this project's goals and how will you achieve them?

Our main mission is to shape public and policy discussions around AI safety by engaging with the ecosystem in several complementary ways:

  • Policy outreach and support: engaging with key policymakers, sharing insights, writing policy briefs, and building collaborations with relevant institutions.

  • Public awareness: writing op-eds and articles, organizing events, and giving interviews.

  • R&D: engaging in technical projects, such as developing an open-source platform for benchmarks in partnership with startups and public institutions.

  • Education and field-building: nurturing our AI safety talent pipeline through university courses, bootcamps, our textbook, online programs, and mentoring.

How will this funding be used?

Budget for 12 months of operations:

  • $212k: Salaries for 6 FTEs (5 full-time staff and 2 part-time):

    • Executive director (already 50% funded)

    • Head of research (already funded)

    • Head of policy (already funded)

    • Head of operations

    • Head of strategy

    • Scientific director (part-time)

    • Media and communications expert (part-time)

  • $74k: Programs

    • $50k: R&D grants and internships

    • $24k: Talks, round tables and workshops

  • $52k: General expenses

    • Offices

    • Subscriptions, equipment, transport, compute

Who is on your team and what's your track record on similar projects?

  • Charbel-Raphaël Ségerie: Executive director. Charbel has been coordinating most of EffiSciences’ AI activities, teaches an accredited AI safety program at a top French research university, kickstarted and facilitates the ML4Good bootcamps, and creates content such as articles and an AI safety textbook. (LessWrong profile)

  • Alexandre Variengien: Head of research. Alexandre is an independent researcher who previously interned at Redwood Research as research manager for the REMIX program and completed his master’s thesis at Conjecture. He was second author on the Circuit for Indirect Object Identification paper. (LessWrong profile)

  • Florent Berthet: Head of operations. Florent is currently EffiSciences’ executive director, and previously co-founded and ran EA France.

  • Manuel Bimich: Head of strategy. Manuel has been involved with EffiSciences' AI division since its early days.

  • Vincent Corruble: Scientific director. Vincent is Associate Professor at Sorbonne University and is a regular visiting researcher at CHAI.

Track record:

We have been doing AI safety field-building in France for two years with good results, reaching 1,000+ students and orienting more than 30 people toward AI safety careers. Our ML4Good bootcamps have since been replicated in several countries, and our textbook is already being used by several groups. You can find more detail in our LessWrong post from last year.

We have recently started building collaborations with multiple organizations to develop tools that might eventually be used to implement the AI Act. These organizations have shown strong interest in our work, and collaborating with them will help us gain credibility among key private and public stakeholders.

While public advocacy was not a priority for us previously, it will be one of our core activities moving forward. We are rapidly acquiring experience in this area, and have already begun establishing partnerships with leading AI journalists in France. For example, we recently published an op-ed supported by Yoshua Bengio in a major French newspaper.

What are the most likely causes and outcomes if this project fails? (premortem)

Likely causes:

  • Insufficient engagement or resistance from key AI actors and policymakers due to ideological differences or bad economic incentives.

  • Inability to secure adequate funding and talent, which is essential to reach a critical mass that would, in turn, attract additional resources and skilled people. Being able to attract people with sufficient experience is especially important for our policy-focused work, but it is challenging to find candidates who are both deeply knowledgeable about the subject and well-connected within the policy ecosystem.

Potential outcomes:

  • Limited impact on shaping public and policy discourse on AI safety, potentially resulting in France adopting positions that undermine international coordination efforts.

  • Polarizing the public discourse. The fields of AI ethics and AI safety are somewhat divided, and we are seeing sparks of this happening in France. By inviting experts from different AI fields and with different beliefs to discuss (e.g. during round tables and panels, as we are currently doing), we aim to promote a healthier debate and foster positive relationships between AI actors in France.

What other funding are you or your project getting?

  • We have already raised $150k for this project. To see how we will use that budget, check the "already funded" mentions in the "How will this funding be used?" section.
