Tessorium is an open-source (MIT licensed) trust protocol for AI agents. AI agents increasingly act autonomously, executing transactions, accessing databases, and making decisions, yet there is no open standard for verifying their identity, evaluating their trustworthiness, or enforcing policy before they act. Tessorium fills this gap with five core components: Ed25519 cryptographic identity (DIDs), dynamic multi-signal trust scoring, policy negotiation and enforcement, an emergency kill switch (<50ms), and tamper-evident Merkle tree audit trails.
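To make the tamper-evident audit trail concrete, here is a minimal sketch of the general Merkle tree technique in plain Python. This is not Tessorium's implementation — the hashing scheme, event format, and the example DID string are illustrative assumptions — but it shows why a Merkle root makes an audit log tamper-evident: changing any logged event changes the root.

```python
import hashlib
import json

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(event: dict) -> bytes:
    # Canonical JSON (sorted keys) so the same event always hashes identically;
    # the 0x00 prefix domain-separates leaves from interior nodes.
    return _h(b"\x00" + json.dumps(event, sort_keys=True).encode())

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root hash."""
    if not leaves:
        return _h(b"")
    level = leaves
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level = level + [level[-1]]
        level = [_h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A toy audit trail of agent actions (the DID value is a placeholder).
events = [
    {"agent": "did:key:example-agent", "action": "db.read", "ts": 1},
    {"agent": "did:key:example-agent", "action": "tx.send", "ts": 2},
]
root = merkle_root([leaf_hash(e) for e in events])

# Any later mutation of a logged event changes the root -> tamper-evident.
events[1]["action"] = "tx.send_all"
assert merkle_root([leaf_hash(e) for e in events]) != root
```

Publishing only the root commits to the entire log; an auditor can later verify any single event with a logarithmic-size inclusion proof rather than the full log.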
The EU AI Act's Article 14 mandates human oversight for high-risk AI systems, with enforcement beginning August 2, 2026 (six months away) and penalties of up to EUR 35M or 7% of global revenue. No existing tool maps to these requirements at the protocol level. Tessorium does, natively.
Live product: https://tessera-sage.vercel.app/protocol/specification
Protocol demo: https://tessera-sage.vercel.app/demo
Console: https://tessera-sage.vercel.app/console
Months 1–2: Open-source core protocol components and publish the TTP specification as a public standard. Submit to W3C AI Agent Protocol Community Group and DIF Trusted AI Agents Working Group.
Months 2–4: Harden the protocol for production — external security audit of cryptographic components, performance benchmarking, and enterprise integration documentation.
Months 3–6: Secure 3 design partners (mid-market fintech/healthtech deploying AI agents) to validate the protocol in real enterprise environments before the EU AI Act August 2026 deadline.
Month 6+: Submit IETF Internet-Draft for agent trust requirements, informed by production learnings from design partners.
External security audit of critical cryptographic components — Ed25519 identity verification module and Merkle tree audit system — before open-sourcing
Infrastructure costs (hosting, CI/CD, testing environments) for 6 months of production hardening
Protocol specification formatting, documentation, and preparation for W3C and DIF submissions
Conference attendance (OpenSSF Europe March 2026, RSA, EU AI Summit) for standards alignment and design partner acquisition
Fractional security advisor (part-time, 3 months) to validate protocol design against enterprise threat models
With minimum funding ($10K): External security audit of the identity verification and Merkle tree modules + 3 months infrastructure costs. This unblocks open-sourcing the core protocol with confidence that the cryptographic foundations are sound.
With full funding ($50K): Complete security audit across all cryptographic components, 6 months infrastructure, fractional security advisor (3 months), protocol spec publication, standards body submissions, and travel to 2 conferences for design partner acquisition. This covers the full path from audited open-source release through standards submission to first enterprise design partner conversations — all before the EU AI Act enforcement date in August 2026.
Solo founder: Abdul Karim Moro — 5 years as a blockchain and security engineer, now applying cryptographic trust infrastructure to AI agents.
Track record building production trust systems:
Built the Soulbound Token credential system for Chung-Ang University (Korea) — non-transferable digital credentials on-chain with selective disclosure. This is the same identity-binding architecture now powering Tessorium's agent identity layer.
Designed cryptographic authentication at Hyperring — ran 18 months with zero security breaches.
Deployed TON blockchain infrastructure serving 70,000+ users at 99.9% uptime.
Audited DeFi smart contracts, optimized gas costs by 40%, reduced blockchain query latency by 60% through custom indexing.
EIP-712 signature authentication and multi-sig wallet implementations in production.
Tessorium-specific execution:
Built the complete protocol stack in under a month of nights and weekends while working full-time — Ed25519 identity, three-component trust scoring, policy negotiation engine, Merkle tree audit trails, BFT consensus, 60+ API endpoints, 2,400+ passing tests.
Live product, interactive protocol demo, and operator console all deployed and publicly accessible today.
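The three-component trust scoring mentioned above is not specified in this application, so the following is a purely hypothetical sketch of what a multi-signal score of that shape could look like: the component names (identity assurance, behavioral history, policy compliance) and the weights are illustrative assumptions, not Tessorium's actual design.

```python
from dataclasses import dataclass

@dataclass
class TrustSignals:
    # All three components are hypothetical stand-ins, normalized to [0, 1].
    identity_assurance: float   # e.g. strength of the agent's DID verification
    behavioral_history: float   # e.g. fraction of past actions within policy
    policy_compliance: float    # e.g. recent policy-check pass rate

def trust_score(s: TrustSignals,
                weights: tuple[float, float, float] = (0.4, 0.35, 0.25)) -> float:
    """Weighted blend of three signals; weights here are illustrative only."""
    parts = (s.identity_assurance, s.behavioral_history, s.policy_compliance)
    return round(sum(w * p for w, p in zip(weights, parts)), 3)

agent = TrustSignals(identity_assurance=0.9,
                     behavioral_history=0.8,
                     policy_compliance=0.6)
print(trust_score(agent))  # 0.4*0.9 + 0.35*0.8 + 0.25*0.6 = 0.79
```

A real protocol would recompute such a score dynamically as new signals arrive and gate actions (or trigger the kill switch) on thresholds, but the blending step reduces to something this simple.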
BS Computer Science, University of Seoul. Based in Seoul, South Korea. Self-taught Korean to TOPIK Level 5 (advanced fluency) in 6 months to get here — same obsessive execution pattern applied to every domain I enter.
Most likely failure mode: enterprises deprioritize agent trust infrastructure and treat AI security as a feature of their existing platforms rather than a protocol-level need. If EU AI Act enforcement is delayed (the European Commission proposed a possible extension to December 2027 in its Digital Omnibus package), urgency drops and enterprises wait rather than adopt.
Second failure mode: a well-funded competitor ($30B+ in M&A is reshaping this space right now) ships a proprietary "trust layer" that gains adoption through distribution advantages before an open standard can establish network effects. The risk isn't that someone builds what we build — it's that enterprises settle for something worse because it came bundled with their existing vendor.
Third failure mode: the protocol design has a flaw that only surfaces under real enterprise workloads — edge cases in cross-org trust negotiation, trust score gaming, or kill switch latency under high concurrency. This is exactly why the security audit funded by this grant matters.
What survives if the project fails:
The open-source protocol specification and codebase remain as public goods (MIT licensed). The cryptographic primitives — Ed25519 agent identity, Merkle tree audit trails, dynamic trust scoring — are reusable by any project or standards body. W3C, IETF, and DIF are all actively developing agent trust frameworks with no reference implementation. Even if Tessorium the company fails, the protocol informs those standards.
The security audit funded by this grant produces a public artifact — an independently reviewed cryptographic identity module any open-source AI safety project can adopt. Nothing funded here is wasted even in the worst case.
$0. Entirely self-funded. The complete protocol stack — 2,400+ tests, 60+ API endpoints, live product, interactive demos — was built before seeking any external capital.
This Manifund application and a concurrent application to the Foresight Institute AI for Science & Safety Nodes program are our first external funding requests.