
Mitigating Systemic Risks of Unchecked AI Deployment

AI governance · Global catastrophic risks · Global health & development

Feranmi Williams

Proposal · Grant
Closes February 6th, 2026
$0 raised
$6,000 minimum funding
$8,000 funding goal


Project summary

While global AI safety research remains siloed in Western laboratories, a critical "Existential Leak" is forming in emerging markets. Frontier agentic AI (systems that move beyond chat into autonomous business decision-making) is being deployed rapidly and without monitoring in regions with fragile regulatory oversight. Nigeria's high-growth MSME (micro, small, and medium enterprise) sector is one of the world's most significant "live labs" for these deployment-side risks.

This project is a Field-Led Policy Inquiry. We are using an established partnership with PLASMIDA (Plateau State Microfinance Development Agency) to conduct a 6-month "Stress-Test" of AI-driven Business Intelligence (BI). By delivering high-level BI training to 100+ business owners, we are not just building capacity; we are extracting ground-truth data on algorithmic failure modes, data-sovereignty leaks, and loss of human agency. The project culminates in a Global Policy Blueprint designed to inform the UN's Global Dialogue on AI Governance and national regulators such as NITDA (Nigeria's National Information Technology Development Agency), ensuring safety standards are interoperable across diverse economic contexts.

What are this project's goals? How will you achieve them?

Goal 1: Empirical Risk Mapping (Fieldwork). Conduct a series of "Action Research" workshops with PLASMIDA-affiliated MSMEs. We will use red-teaming exercises and "Shadow AI" audits to identify how local business owners unintentionally bypass safety guardrails when using AI for strategic decisions; a minimal sketch of such an audit appears after Goal 3.

Goal 2: The Interoperability Blueprint. Translate field findings into a 10,000-word research paper and a 5-page "Executive Policy Brief." This document will specifically address the "Deployment Gap": the disconnect between Western safety theory and Global South implementation.

Goal 3: International Policy Advocacy. Submit findings and recommendations to the OECD AI Policy Observatory and the African Union AI Task Force to catalyze context-sensitive regulatory standards.
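
To make the "Shadow AI" audit from Goal 1 concrete, here is a minimal sketch of a heuristic pre-filter over logged workshop prompts. The pattern list and the helper names (flag_prompt, audit) are illustrative assumptions for this writeup, not the project's actual audit criteria.

```python
# Hypothetical sketch of a "Shadow AI" audit pass over logged prompts.
# The patterns and helper names below are illustrative assumptions only.
import re

# Surface patterns that often accompany guardrail-bypass attempts
BYPASS_PATTERNS = [
    r"ignore (the|your) (previous|safety) (instructions|rules)",
    r"pretend (you are|to be)",
    r"no (restrictions|limits|filters?)",
]

def flag_prompt(prompt: str) -> list[str]:
    """Return the bypass patterns this prompt matches, if any."""
    lowered = prompt.lower()
    return [p for p in BYPASS_PATTERNS if re.search(p, lowered)]

def audit(prompts: list[str]) -> dict[str, list[str]]:
    """Map each flagged prompt to the patterns it triggered."""
    return {p: hits for p in prompts if (hits := flag_prompt(p))}

# Example: two workshop-style prompts, one of which trips two patterns
print(audit([
    "Draft a 3-month cash-flow forecast for my shop.",
    "Pretend you are my accountant with no restrictions.",
]))
```

In the field, a filter like this would only surface candidates for manual review; unintentional bypasses are often phrased in ordinary business language that no keyword list will catch.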

How will this funding be used?

The $8,000 enables a transition from a commercial training model to an independent research mission:

Research Buy-out (6 months): $5,000 (Covers lead researcher’s time for curriculum design, data analysis, and academic writing).

Field Data Logistics (Jos/Lagos): $1,500 (Partnership activation with PLASMIDA/LCCI, participant data stipends, and workshop security).

Global Dissemination: $1,500 (Open-access publication fees in high-impact journals and registration/travel for one major 2026 policy forum to present the Blueprint).

Who is on your team? What's your track record on similar projects?

I am the lead researcher and founder of Linnexus AI Institute. My track record includes the successful deployment of Generative AI training for over 100 professionals in collaboration with PLASMIDA. This existing relationship is our "unfair advantage," allowing us to bypass the access barriers that stymie most Global South research. I am also an independent fellow at the University of Ibadan, bridging the gap between state-level implementation and academic rigor.

What are the most likely causes and outcomes if this project fails?

Cause: Data "noisiness". Participants may provide inconsistent feedback.

Mitigation: We are using technical logging (with consent) during the BI workshops to track how participants prompt and interact with AI models in real time; a minimal illustration of such logging follows below.
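
As a rough sketch of what consent-gated logging could look like (assuming pseudonymous participant IDs and local, access-controlled storage; the Interaction record and log_interaction helper are hypothetical names, not the project's tooling):

```python
# Hypothetical sketch of consent-gated interaction logging for the BI workshops.
import json
import time
from dataclasses import asdict, dataclass
from pathlib import Path

LOG_DIR = Path("workshop_logs")  # assumed local, access-controlled storage

@dataclass
class Interaction:
    participant_id: str    # pseudonymous ID, never a real name
    consented: bool        # recorded at workshop intake
    timestamp: float
    prompt: str            # what the participant asked the AI model
    response_summary: str  # truncated/redacted model output

def log_interaction(event: Interaction) -> None:
    """Append one prompt/response event, but only if the participant consented."""
    if not event.consented:
        return  # no consent, no record
    LOG_DIR.mkdir(exist_ok=True)
    path = LOG_DIR / f"{event.participant_id}.jsonl"
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Example: one logged event from a workshop session
log_interaction(Interaction(
    participant_id="p-017",
    consented=True,
    timestamp=time.time(),
    prompt="Should I cut staff based on this sales dip?",
    response_summary="[model recommended cuts; flagged for agency-loss review]",
))
```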

Outcome if the project fails: The "Existential Leak" continues. Global standards remain Western-centric, leaving the Global South as an unregulated testing ground for advanced AI, which could lead to systemic economic shocks or irreversible data breaches.

How much money have you raised in the last 12 months, and from where?

$0. This is an independent transition from commercial AI training to safety-focused research.

Similar projects

Muhammad Ahmad

Building Frontier AI Governance Capacity in Africa (Pilot Phase)

A pilot to build policy and technical capacity for governing high-risk AI systems in Africa

Technical AI safety · AI governance · Biosecurity · Forecasting · Global catastrophic risks
$0 raised of $50K goal

Anthony Ware

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap

Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.

Technical AI safety · AI governance · Global catastrophic risks
$0 raised of $23.5K goal

AI Safety Nigeria

Learn & Launch: Monthly Training and Mentorship for Early-Stage AI Safety & Gov.

A low-cost, high-leverage capacity-building program for early-career AI safety and governance practitioners

Technical AI safety · AI governance · Global catastrophic risks
$0 raised of $2.5K goal

Agwu Naomi Nneoma

Nneoma Agwu — Building a Career in AI Governance to Reduce Long-Term Risks.

Funding a Master's in AI, Ethics & Society to transition into AI governance, long-term risk mitigation, and safety-focused policy development.

Technical AI safety · AI governance · EA community · Global catastrophic risks · Global health & development
$0 raised