Project summary
While global AI safety research remains siloed in Western laboratories, a critical "Existential Leak" is forming in emerging markets. Rapid, unmonitored deployment of frontier Agentic AI (systems that move beyond chat to autonomous business decision-making) is occurring in regions with fragile regulatory oversight. Nigeria’s high-growth MSME (micro, small, and medium enterprise) sector represents the world’s most significant "live lab" for these deployment-side risks.
This project is a Field-Led Policy Inquiry. We are using an established partnership with PLASMIDA (Plateau State Microfinance Development Agency) to conduct a 6-month "Stress-Test" on AI-driven Business Intelligence (BI). By delivering high-level BI training to 100+ business owners, we are not just building capacity; we are extracting ground-truth data on algorithmic failure modes, data-sovereignty leaks, and loss of human agency. This project culminates in a Global Policy Blueprint designed to inform the UN’s Global Dialogue on AI Governance and national regulators like NITDA, ensuring safety standards are interoperable across diverse economic contexts.
What are this project's goals? How will you achieve them?
Goal 1: Empirical Risk Mapping (Fieldwork). Conduct a series of "Action Research" workshops with PLASMIDA-affiliated MSMEs. We will use red-teaming exercises and "Shadow AI" audits to identify how local business owners unintentionally bypass safety guardrails when using AI for strategic decisions.
Goal 2: The Interoperability Blueprint. Translate field findings into a 10,000-word research paper and a 5-page "Executive Policy Brief." This document will specifically address the "Deployment Gap": the disconnect between Western safety theory and Global South implementation.
Goal 3: International Policy Advocacy. Submit findings and recommendations to the OECD AI Policy Observatory and the African Union AI Task Force to catalyze context-sensitive regulatory standards.
How will this funding be used?
The $8,000 enables a transition from a commercial training model to an independent research mission:
Research Buy-out (6 months): $5,000 (Covers lead researcher’s time for curriculum design, data analysis, and academic writing).
Field Data Logistics (Jos/Lagos): $1,500 (Partnership activation with PLASMIDA/LCCI, participant data stipends, and workshop security).
Global Dissemination: $1,500 (Open-access publication fees in high-impact journals and registration/travel for one major 2026 policy forum to present the Blueprint).
Who is on your team? What's your track record on similar projects?
I am the lead researcher and founder of Linnexus AI Institute. My track record includes the successful delivery of Generative AI training to over 100 professionals in collaboration with PLASMIDA. This existing relationship is our "unfair advantage," allowing us to bypass the access barriers that stymie most Global South research. I am also an independent fellow at the University of Ibadan, bridging the gap between state-level implementation and academic rigor.
What are the most likely causes and outcomes if this project fails?
Cause: Data "noisiness." Participants may provide inconsistent or unreliable feedback.
Mitigation: We are using technical logging (with consent) during the BI workshops to track how participants prompt and interact with AI models in real-time.
Outcome if project fails: The "Existential Leak" continues. Global standards remain Western-centric, leaving the Global South as an unregulated testing ground for advanced AI, which could lead to systemic economic shocks or irreversible data breaches.
How much money have you raised in the last 12 months, and from where?
$0. This is an independent transition from commercial AI training to safety-focused research.