EffiSciences promotes and teaches AI safety in France.
We organize AI safety bootcamps (ML4Good), teach accredited courses, and run hackathons and talks at France's top research universities. We also support the replication of these bootcamps abroad (so far twice in Switzerland and once in Germany, organized mostly by ML4G alumni).
On the biorisk side, we teach accredited courses and organize conferences.
We've reached 700 students in a little over a year, 30 of whom are already orienting their careers toward AI safety research or field building.
>> More info in our recent LessWrong post.
What are this project's goals and how will you achieve them?
We want to help promising researchers work on AI and bio x-risks, and raise public awareness on these issues.
France has among the best researchers in the world (especially in math and CS) but didn't have any AI safety pipeline until EffiSciences started creating one. As the only field-building org in France focused on these topics, and thanks to good partnerships with top universities, we are well-placed to have a strong impact in the coming years.
One goal we are currently working on is to leverage our position to influence the French research and policy arena (although with extreme care when it comes to policy).
How will this funding be used?
Extend our runway by up to 12 additional months (our current funds will last until the end of April 2024)
Fund promising projects, such as:
Write content to promote and support AI safety in France
Offer research and internship opportunities for talented researchers to work on safety
Keep helping groups in other countries launch their own ML4Good bootcamps
Create bridges between the AI safety and ethics communities, to promote a healthier discourse than we observe in other countries
Who is on your team and what's your track record on similar projects?
Charbel-Raphaël Segerie, head of our AI safety unit, who has been leading most of our AI projects since our inception two years ago.
Diane Letourneur, head of our biorisk unit, who teaches a biorisk seminar at ENS Paris.
Florent Berthet, director, who previously co-founded EA France.
20+ volunteers, who work regularly on various activities and projects. See our case studies for more info.
What are the most likely causes and outcomes if this project fails? (premortem)
Failing to attract people: If AI safety (or our org) gets a bad rap in France, we could have a harder time attracting talented people into our pipeline. To mitigate this, we aim to be proactive and foster more dialogue with actors across the AI discourse spectrum.
Lack of opportunities: Getting people interested and upskilling them is one thing, but to have an impact we need them to be able to produce valuable work. Today, besides a handful of AI safety labs, there is a lack of good opportunities to do useful research. A failure mode would be to train people only to have them struggle to find internships or jobs afterwards. This is why we would like to offer research positions in-house or in partnership with ENS Paris.
What other funding are you or your project getting?
Our most recent funding has come from Open Philanthropy and Lightspeed Grants.