
EffiSciences: AI safety field-building in France


Florent Berthet

Not funded · Grant
$0 raised

Project summary

EffiSciences promotes and teaches AI safety in France.

We organize AIS bootcamps (ML4Good), teach accredited courses, and run hackathons and talks at France's top research universities. We also support the replication of these bootcamps abroad (so far twice in Switzerland and once in Germany, organized mostly by ML4G alumni).

We also teach accredited courses and organize conferences on biorisks.

We've reached 700 students in a bit more than one year, 30 of whom are already orienting their careers toward AIS research or field-building.

More info in our recent LessWrong post.

What are this project's goals and how will you achieve them?

We want to help promising researchers work on AI and bio x-risks, and to raise public awareness of these issues.

France has some of the best researchers in the world (especially in math and CS) but had no AI safety pipeline until EffiSciences started building one. As the only field-building organization in France focused on these topics, and thanks to strong partnerships with top universities, we are well placed to have a significant impact in the coming years.

One goal we are currently working on is to leverage our position to influence the French research and policy arena (although with extreme care when it comes to policy).

How will this funding be used?

  • Extend our runway by up to 12 additional months (current funds will last until end of April 2024)

  • Fund promising projects, such as:

    • Writing content to promote and support AI safety in France

    • Offering research and internship opportunities for talented researchers to work on safety

    • International bootcamps: continuing to help groups in other countries launch their own ML for Good bootcamps

    • Creating bridges between the AI safety and AI ethics communities, to promote a healthier discourse than what we observe in other countries

Who is on your team and what's your track record on similar projects?

  • Charbel-Raphaël Segerie, head of our AI safety unit, who has been leading most of our AI projects since our inception two years ago.

  • Diane Letourneur, head of our biorisk unit, who teaches a biorisk seminar at ENS Paris.

  • Florent Berthet, director, who previously co-founded EA France.

  • 20+ volunteers, who work regularly on various activities and projects. See our case studies for more info.

What are the most likely causes and outcomes if this project fails? (premortem)

  • Failing to attract people: If AI safety (or our org) gets a bad rap in France, we could have a harder time attracting talented people into our pipeline. To mitigate this, we aim to be proactive and to foster more dialogue with actors at the different ends of the AI discourse spectrum.

  • Lack of opportunities: Getting people interested and upskilling them is one thing, but to have an impact we need them to be able to produce valuable work. Today, outside a handful of AIS labs, there are few good opportunities to do useful research. A failure mode would be to train people only to have them struggle to find internships or jobs afterwards. This is why we would like to offer research positions in-house, or in partnership with ENS Paris.

What other funding are you or your project getting?

Our most recent funding has come from Open Philanthropy and Lightspeed grants.

Similar projects

  • CeSIA (Florent Berthet) — French center for AI safety. Technical AI safety · AI governance · Global catastrophic risks. $0 raised.

  • Biosecurity bootcamp by EffiSciences (Jonathan Claybrough) — 5-day bootcamp upskilling participants on biosecurity, to enable and empower career changes toward reducing biorisks, from ML4Good organisers. Biosecurity · Global catastrophic risks. $1.3K raised.

  • Launching an AI safety org at a top French coding school + funding my studies (Gautier Ducurtil) — I need to focus on my studies and on creating AI safety projects without having to take a dead-end job to fund them. Science & technology · Technical AI safety · AI governance. $0 raised.

  • Hiring AI Policy collaborators to prepare France AI Action Summit (Lucie Philippon) — AI governance · Long-Term Future Fund · Global catastrophic risks. $0 raised.

  • Scaling AI safety awareness via content creators (Centre pour la Sécurité de l'IA) — 4M+ views on AI safety: help us replicate and scale this success with more creators. Technical AI safety · AI governance · Global catastrophic risks. $21.3K raised.

  • Keep Apart Research Going: Global AI Safety Research & Talent Pipeline (Apart Research) — Funding ends June 2025: urgent support for a proven AI safety pipeline converting technical talent from 26+ countries into published contributors. Technical AI safety · AI governance · EA community. $131K raised.

  • General support for SaferAI (SaferAI) — Support for SaferAI's technical and governance research and education programs to enable responsible and safe AI. AI governance. $100K raised.