Project summary
We are currently in a Governance Deadlock. Recent 2026 developments, specifically the US supply-chain ban on Anthropic and the concentration of proprietary AI deals for critical infrastructure, demonstrate that voluntary safety agreements are insufficient. The world is building a global Trust Monoculture. In this environment, a single point of failure in a black-box model could trigger a catastrophic collapse of energy, trade, and defense systems.
The Verified Loop is an emergency intervention designed to move trust away from "Corporate Handshakes" and into Immutable Technical Proofs. Our protocol operationalizes the transparency mandates of the EU AI Act (Article 50) using Zero-Knowledge Proofs (ZKPs) and Near-Infrared (NIR) Spectroscopy. This enables safety verification without requiring developers to expose proprietary weights or governments to "just trust" a black box.
What are this project's goals? How will you achieve them?
The goal is to deploy a three-layer defense-in-depth architecture that anchors AI safety in physical reality:
1. Macro (The Decentralized ZKP Registry)
We utilize Zero-Knowledge Proofs to enable Verification without Revelation. This allows national regulators to verify that an AI model meets specific safety benchmarks (e.g., alignment with EU AI Act Article 50 Transparency Obligations and the NIST AI Risk Management Framework) without the developer having to expose proprietary model weights.
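The registry flow can be sketched conceptually. The code below is an illustrative stand-in only: it uses a simple hash commitment rather than a real zero-knowledge proof (a production registry would use a zk-SNARK or zk-STARK library so the benchmark result can be proven about the committed model without revealing anything), and the function names, benchmark label, and opening step are hypothetical.

```python
import hashlib
import secrets

def commit(model_weights_digest: bytes, nonce: bytes) -> str:
    """Publish a binding, hiding commitment to the model without revealing it."""
    return hashlib.sha256(nonce + model_weights_digest).hexdigest()

# Developer side: run the safety benchmark privately, then publish a
# commitment to the weights alongside the claimed result.
weights_digest = hashlib.sha256(b"proprietary-model-weights").digest()
nonce = secrets.token_bytes(32)
registry_entry = {
    "commitment": commit(weights_digest, nonce),
    "benchmark": "EU-AI-Act-Art50-transparency",  # hypothetical label
    "claimed_pass": True,
}

# Regulator side: check that an opening matches the published commitment.
# In the real protocol, a ZKP replaces this reveal entirely.
def regulator_verify(entry: dict, revealed_nonce: bytes, revealed_digest: bytes) -> bool:
    return entry["commitment"] == commit(revealed_digest, revealed_nonce)
```

The commitment binds the developer to one specific model at registration time, so a later proof of benchmark compliance cannot be silently swapped onto a different set of weights.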
2. Meso (Physical Truth Anchors)
This layer prevents Physical Hallucinations in critical infrastructure. Using Near-Infrared (NIR) Spectroscopy and Forensic Ink, we create unique chemical signatures for physical assets. The AI’s Digital Twin must match the physical sensor’s molecular reading, transmitted over AES-128-secured machine-to-machine (M2M) links, before any autonomous action is authorized in industrial environments.
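The gating logic can be sketched minimally: compare the twin's expected spectral signature against the live sensor reading and only authorize the action when they agree. The threshold, band count, and function names below are illustrative assumptions, not calibrated values.

```python
import math

SIMILARITY_THRESHOLD = 0.98  # assumed calibration tolerance

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two spectra represented as absorbance vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def authorize_action(expected_spectrum: list[float], sensor_spectrum: list[float]) -> bool:
    """Gate an autonomous action on the physical anchor matching the twin."""
    return cosine_similarity(expected_spectrum, sensor_spectrum) >= SIMILARITY_THRESHOLD

digital_twin = [0.12, 0.45, 0.83, 0.91, 0.40]   # expected NIR absorbance bands
field_reading = [0.13, 0.44, 0.82, 0.90, 0.41]  # live sensor reading
```

A reading from the wrong asset, or from a hallucinated state the twin never observed, falls below the threshold and the action is refused by default.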
3. Micro (Stealth Verification)
The protocol uses Decentralized Identifiers (DIDs) to mitigate the Treacherous Turn problem (where an AI behaves safely only when it knows it is being audited). DIDs enable continuous, non-custodial auditing at the edge, so the AI's alignment is checked even when it is not "observed" by a central authority.
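One way to make such audits unpredictable is to derive each audit decision from shared randomness the audited system cannot forecast. This is a conceptual sketch; the beacon source, DID format, and 10% audit rate are assumptions, not protocol parameters.

```python
import hashlib

def should_audit(beacon: bytes, did: str, epoch: int, audit_rate: float = 0.10) -> bool:
    """Deterministically derive an audit decision from a randomness beacon,
    the device's DID, and the time epoch. The audited system cannot
    precompute when it will be observed, because the beacon value for a
    future epoch is unknown until that epoch begins."""
    digest = hashlib.sha256(beacon + did.encode() + epoch.to_bytes(8, "big")).digest()
    # Map the first 4 bytes to [0, 1) and compare against the audit rate.
    return int.from_bytes(digest[:4], "big") / 2**32 < audit_rate

# Example: over many epochs, roughly 10% trigger an audit for a given device.
audits = sum(
    should_audit(b"beacon:" + epoch.to_bytes(8, "big"), "did:example:pump-07", epoch)
    for epoch in range(10_000)
)
```

In the full protocol the beacon would come from a decentralized randomness source (a drand-style beacon, for instance), so that no single party, including the auditor, can predict or bias the audit schedule.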
How will this funding be used?
Funding Utilization (Seed Round: $85,000)
This capital is designated solely for the technical development and global validation of the protocol.
Technical Lead & Research ($50,000): Covers the engineering and research effort required for the 24-month development cycle. This includes ZKP registry design and multidisciplinary coordination across our partner hubs in the UK, Spain, and Germany.
Prototyping and Hardware ($20,000): Procurement of high-precision NIR sensors, forensic chemical reagents, and ZKP compute resources (GPUs/FPGAs) to scale cryptographic proofs.
Global Validation and AISI Coordination ($15,000): Technical coordination with AI Safety Institutes (AISIs) and field testing in the UK, Spain (AESIA Sandbox), and Germany (BSI Standards) to ensure international industrial interoperability.
Who is on your team? What's your track record on similar projects?
I am a Venture Builder and Systems Architect with a Rank 1 background in Economics and Infrastructure, and I hold a conditional offer from the University of Glasgow for a multidisciplinary program bridging technical safety and global governance. My track record includes:
Empowerment Edge: Founded high-fidelity digital platforms to solve systemic local challenges.
Afryvo Analytics: Led the integration of AI-driven data intelligence ecosystems for measurable impact.
NSTP NUST Fellowship: Vetted by Pakistan's premier innovation hub for high-stakes technical execution.
National Recognition: Recipient of the Prime Minister's Innovation Award for technology.
I am anchoring technical development at the University of Glasgow to align with the ARIA Scaling Trust Track 3.2. My strategy includes stress-testing the Verified Loop against EU AI Act Article 50 mandates via Spain's AESIA sandbox and Germany's BSI industrial standards.
What are the most likely causes and outcomes if this project fails?
We identify three primary technical and strategic bottlenecks:
Compute Latency: ZKP generation for frontier-scale models may currently be too slow for real-time infrastructure response.
Hardware Fragility: NIR sensors may require frequent recalibration in harsh industrial environments.
Adoption Friction: Frontier labs may view even ZKP-based audits as a side-channel attack risk for their proprietary weights.
Outcome of Failure: If global adoption is not achieved, the project still produces an interoperable Sovereign Template for the Global Majority. By open-sourcing our molecular signatures and ZKP schemas, we empower emerging economies to implement independent verification locally. This prevents a Safety Monopoly and ensures that a lack of global consensus does not leave infrastructure vulnerable.
How much money have you raised in the last 12 months, and from where?
Over the past 12 months I have been supported by national fellowships (NSTP NUST) and recognized with the Prime Minister's Innovation Award. This $85,000 request is the primary seed capital to bridge these multidisciplinary successes into the UK and EU AI Safety corridors.