

Mitigating Systemic Risks of Unchecked AI Deployment

AI governance · Global catastrophic risks · Global health & development

Feranmi Williams

Proposal · Grant
Closes February 6th, 2026
$0 raised
$6,000 minimum funding
$8,000 funding goal


Project summary

While global AI safety research remains siloed in Western laboratories, a critical "Existential Leak" is forming in emerging markets. Frontier agentic AI (systems that move beyond chat into autonomous business decision-making) is being deployed rapidly and without monitoring in regions with fragile regulatory oversight. Nigeria's high-growth MSME (micro, small, and medium enterprise) sector represents the world's most significant "live lab" for these deployment-side risks.

This project is a Field-Led Policy Inquiry. We are using an established partnership with PLASMIDA (Plateau State Microfinance Development Agency) to conduct a six-month "stress test" of AI-driven Business Intelligence (BI). By delivering high-level BI training to 100+ business owners, we are not just building capacity; we are extracting ground-truth data on algorithmic failure modes, data-sovereignty leaks, and loss of human agency. The project culminates in a Global Policy Blueprint designed to inform the UN's Global Dialogue on AI Governance and national regulators such as NITDA, ensuring safety standards are interoperable across diverse economic contexts.

What are this project's goals? How will you achieve them?

Goal 1: Empirical Risk Mapping (Fieldwork). Conduct a series of "Action Research" workshops with PLASMIDA-affiliated MSMEs. We will use red-teaming exercises and "Shadow AI" audits to identify how local business owners unintentionally bypass safety guardrails when using AI for strategic decisions.
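As an illustration of what one "Shadow AI" audit record might capture, here is a minimal sketch in Python; every field name below is a hypothetical stand-in, not the project's actual audit instrument:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BypassIncident:
    """One observed case of a participant working around an AI guardrail.

    Illustrative only: these fields are assumptions about what a
    fieldwork audit could record, not a confirmed schema.
    """
    participant_id: str      # pseudonymous workshop ID, never a real name
    business_task: str       # the decision the AI was asked to make
    guardrail_bypassed: str  # the safety behavior the prompt worked around
    prompt_pattern: str      # anonymized description of the prompt used
    observed_risk: str       # e.g. data-sovereignty leak, loss of oversight
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Structuring each incident this way would let the workshops be aggregated into the failure-mode map that Goal 2 translates into policy.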

Goal 2: The Interoperability Blueprint. Translate field findings into a 10,000-word research paper and a 5-page "Executive Policy Brief." This document will specifically address the "Deployment Gap": the disconnect between Western safety theory and Global South implementation.

Goal 3: International Policy Advocacy. Submit findings and recommendations to the OECD AI Policy Observatory and the African Union AI Task Force to catalyze context-sensitive regulatory standards.

How will this funding be used?

The $8,000 enables a transition from a commercial training model to an independent research mission:

Research Buy-out (6 months): $5,000 (Covers lead researcher’s time for curriculum design, data analysis, and academic writing).

Field Data Logistics (Jos/Lagos): $1,500 (Partnership activation with PLASMIDA/LCCI, participant data stipends, and workshop security).

Global Dissemination: $1,500 (Open-access publication fees in high-impact journals and registration/travel for one major 2026 policy forum to present the Blueprint).

Who is on your team? What's your track record on similar projects?

I am the lead researcher and founder of Linnexus AI Institute. My track record includes the successful deployment of Generative AI training for over 100 professionals in collaboration with PLASMIDA. This existing relationship is our "unfair advantage," allowing us to bypass the access barriers that stymie most Global South research. I am also an independent fellow at the University of Ibadan, bridging the gap between state-level implementation and academic rigor.

What are the most likely causes and outcomes if this project fails?

Cause: Data "noisiness." Participants may provide inconsistent feedback.

Mitigation: We are using technical logging (with consent) during the BI workshops to track how participants prompt and interact with AI models in real time.
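As a sketch of what that consented logging could look like, assume a Python-based workshop tool that wraps whatever chat-model client is in use; call_model, the log path, and the record fields below are all illustrative assumptions, not the project's actual tooling:

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("workshop_interaction_log.jsonl")

def call_model(prompt: str) -> str:
    # Placeholder for whichever chat-model client the workshop uses.
    return "model response"

def ask_model(participant_id: str, prompt: str, consented: bool) -> str:
    """Send a prompt to the model; log the exchange only with consent."""
    response = call_model(prompt)
    if consented:  # nothing is recorded without explicit participant consent
        record = {
            "ts": time.time(),              # timestamp of the interaction
            "participant": participant_id,  # pseudonymous workshop ID
            "prompt": prompt,
            "response": response,
        }
        with LOG_PATH.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return response
```

Appending each exchange as one self-contained JSONL record keeps the raw interaction data easy to aggregate later for the failure-mode analysis, regardless of how noisy the self-reported feedback turns out to be.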

Outcome if project fails: The "Existential Leak" continues. Global standards remain Western-centric, leaving the Global South as an unregulated testing ground for advanced AI, which could lead to systemic economic shocks or irreversible data breaches.

How much money have you raised in the last 12 months, and from where?

$0. This is an independent transition from commercial AI training to safety-focused research.

Comments

Feranmi Williams

about 8 hours ago

For regrantors looking for evidence of our fieldwork capacity: In August 2025, I single-handedly led a strategic partnership with PLASMIDA (Plateau State Microfinance Development Agency) to train over 100 professionals and youth on Generative AI.

This wasn't just a basic workshop; we engaged a high-level cohort of lawyers, engineers, and entrepreneurs in Jos. You can see the scale of that engagement and the formal partnership in action here:

https://www.linkedin.com/posts/feranmi-williams-6417212b6_cheers-to-more-years-of-strategic-partnership-activity-7364691134184316931-UGXQ?utm_source=share&utm_medium=member_android&rcm=ACoAAEvs4FEBzilQ5lrr7ryL8GkiNPsvg_uUcYk

The $8,000 research project I am proposing now leverages this exact high-trust network. We are returning to these 100+ professionals to conduct Action Research, monitoring how their deployment of AI has evolved and identifying the "Existential Leaks" that global policy currently overlooks.
