
Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

6 Month Stipend to Support a Transition to AI Governance Work

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks

Nicole Mutung'a

Proposal · Grant
Closes December 19th, 2025
$0 raised
$500 minimum funding
$7,500 funding goal


28 days left to contribute


Project summary

I am seeking six months of career-transition support to investigate the growing competition in the development of advanced artificial intelligence systems, and to assess whether this competition could trigger an “AI race to the bottom” with significant implications for global AI safety.
An in-progress proposal outlining the broad direction is available here: <https://1drv.ms/w/c/2636267dae1bc663/EZ50UNOxV0FCiINKmkzDJJ0B7C7MoH1gfgG5aHgNfh0Hxw>. The feedback I have received on it so far has been largely positive.

What are this project's goals? How will you achieve them?

The project's goals are as follows:

  1. Producing a high-quality academic paper based on the above proposal.
    I will draw on my existing network of AI researchers for feedback as the project progresses.

  2. Completing a virtual AI public policy course offered by the London School of Economics. https://www.lse.ac.uk/study-at-lse/executive-education/programmes/ai-law-policy-and-governance#programmeContent

  3. Applying for roles in the AI governance space.

How will this funding be used?

The funding will cover a stipend of USD 5,500 over six months, plus USD 2,000 for the public policy short course.

Who is on your team? What's your track record on similar projects?

I am a qualified corporate commercial lawyer. I worked in the technology department of a law firm for several years, which gave me a good understanding of both national and regional technology initiatives. I have also worked as a research fellow at the Ilina Fellowship, which focuses on producing academic work on AI safety.

I have also been a community builder for the Effective Altruism community in Nairobi for the past few years. In that role, I have led discussions on AI governance, including introducing the topic to laypeople as part of an introductory fellowship.

Transitioning to full-time AI governance work will significantly increase my contribution to reducing catastrophic AI risks.

What are the most likely causes and outcomes if this project fails?

Risk: Research not reaching decision-makers
- Mitigation: Publish in accessible formats, partner with think tanks/universities, present findings in regional policy spaces.

Risk: Burnout or isolation in independent research
- Mitigation: Attend workshops and maintain structured monthly accountability. I also have connections with AI governance researchers who can provide feedback on my research.

How much money have you raised in the last 12 months, and from where?

I have raised USD 0 in the last 12 months. If this grant (or any other grant) meets my funding needs, I will withdraw remaining applications to avoid double funding.
