Bridge OS is governance middleware that enforces human oversight over AI at the schema level — not through prompts, training, or behavioral guidelines, but through architectural gates the system cannot bypass. There is no code path around the gate.
Every AI action passes through a deterministic governance engine:
- Green Light: Simple, routine actions proceed
- Yellow Light: Ambiguous prompts and uncertain actions are flagged for review
- Red Light: Irreversible decisions require human confirmation
This is enforced at the schema level: JSON schemas make the human gate structurally non-bypassable. The governance engine is a pure function: no IO, no state mutation, no side effects. Every evaluation produces a deterministic SHA-256 audit hash, so the system is forensically auditable by design.
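A minimal sketch of what such a gate could look like, assuming a TypeScript engine with Zod validation (the field names, thresholds, and schema shape below are illustrative assumptions, not the actual Bridge OS schema):

```typescript
import { createHash } from "node:crypto";
import { z } from "zod";

// Illustrative action schema: an assumption for this sketch, not the real Bridge OS schema.
const ActionSchema = z.object({
  id: z.string(),
  description: z.string(),
  reversible: z.boolean(),                   // false means the action cannot be undone
  ambiguityScore: z.number().min(0).max(1),  // hypothetical uncertainty signal from the caller
});
type Action = z.infer<typeof ActionSchema>;

interface GateResult {
  light: "green" | "yellow" | "red";
  requiresHuman: boolean;
  auditHash: string; // deterministic SHA-256 over inputs and verdict
}

// Pure function: no IO, no state mutation, no side effects.
// The same action always produces the same verdict and the same audit hash.
export function evaluate(rawAction: unknown): GateResult {
  const action = ActionSchema.parse(rawAction); // schema gate: malformed actions never pass
  const light =
    !action.reversible ? "red"
    : action.ambiguityScore > 0.5 ? "yellow"
    : "green";
  const requiresHuman = light !== "green";
  const auditHash = createHash("sha256")
    .update(JSON.stringify({ action, light, requiresHuman }))
    .digest("hex");
  return { light, requiresHuman, auditHash };
}
```

Because the function takes only its input and returns only a value, anyone can recompute the audit hash later to verify what was evaluated and what verdict it produced.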
The .03 Principle — the mathematical and empirical foundation of this architecture — holds that any system that autonomously completes more than 97% of a task destroys the meaning it claims to preserve. The pattern appears across domains: Shannon's information theory (a message with no uncertainty carries no information), Gödel's incompleteness (no system can verify itself from within), cardiac diastole (the heart spends 63% of its cycle in a pause state to recover), and neural refractory periods (neurons must rest before firing again). The 3% is not an inefficiency. It's where human judgement, oversight, and authority enter a system that would otherwise run to 100% completion.
Our empirical research (Artificial Diastole) tested this directly: AI outputs built with structured pauses were preferred by blind evaluators 35 out of 38 times across Claude and GPT-4.1 model families with a 96% preference rate. The pause isn't a limitation; it gives LLMs the room to produce better output.
Bridge OS does not compete with AI systems; it governs them. It sits as middleware that any AI application can integrate through a REST API. The architecture is model-agnostic, framework-agnostic, and designed for regulated industries where accountability is non-negotiable and demand for safety is at an all-time high: healthcare, finance, legal, and autonomous operations.
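To illustrate the integration model, a calling application might wrap each proposed AI action in one request to the governance endpoint before executing it (the URL, payload, and response fields here are hypothetical, not the published API):

```typescript
// Hypothetical integration sketch; the endpoint, payload, and response shape are assumptions.
const res = await fetch("https://bridge-os.example.com/v1/evaluate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    id: "tool-call-42",
    description: "Delete customer record 8831",
    reversible: false,
    ambiguityScore: 0.1,
  }),
});
const verdict = await res.json(); // e.g. { light: "red", requiresHuman: true, auditHash: "..." }

if (verdict.requiresHuman) {
  // Route to a human reviewer and wait for explicit confirmation before executing.
} else {
  // Green light: execute the action and store the audit hash alongside it.
}
```

In this model the calling application still executes its own actions; the governance layer only decides whether a human must confirm first, and leaves a hash behind for the audit trail.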
The engine is built and its core tests pass; we are currently hardening the API for public deployment.
Important Links
GitHub: https://github.com/Griffinwalters-Bridgetech/Bridge-os-governance
Landing Page: https://jade-skunk-b24.notion.site/Bridge-Technologies-99-97-Labs-2f00115b08fc80c28b67eab5d82bb259
Deploy Bridge OS as a publicly callable governance API so that any AI application can enforce human oversight at the schema level.
Our goals for the next 3-6 months and how we'll achieve them:
Deploy the governance API as a hosted public service — The engine exists and passes core tests locally. The next step is deploying it to cloud infrastructure with public documentation and an SDK so any developer can integrate governance into their AI application with a single API call.
Complete patent continuation — A provisional patent is filed covering the .03 Principle and the architectural gate pattern. Funding enables continuation to a full filing, protecting the core IP before open-sourcing the engine.
Publish Artificial Diastole research — Our empirical findings (96% preference rate for structured pause across two model families) need formal writeup and journal submission. This validates the science behind the architecture and gives the broader AI safety community a reproducible methodology.
Open-source the core governance engine — Once the API is deployed and the patent is secured, we open-source the engine for community audit, adoption, and contribution. Governance infrastructure should be a public good.
How we'll know we're on track: The API is live, documented, and callable. At least one external developer has integrated it. The patent is filed. The research paper is submitted. The engine is public on GitHub with a clean build.
Core budget:
API deployment infrastructure — cloud hosting, CI/CD pipeline, security audit ($6,000)
Legal — provisional-to-full patent filing, entity formation ($4,000)
Research documentation — Artificial Diastole paper, journal submission, SDK docs ($1,500)
Engineering collaboration — distributed team, API hardening, test coverage ($2,500)
Funding above this accelerates the path to sustainability: runway to execute full-time, pilot integrations with regulated-industry partners, and development toward a revenue-generating business model. The goal is a self-sustaining company, not indefinite grant dependency.
All work to date has been self-funded with no external funding received.
Griffin Walters — Founder. MBA and Master's in Marketing with a focus on AI co-creation and trust. Professional background in technological integration and operations. Former Division I and semi-professional athlete. Built the governance engine, filed the provisional patent, conducted the Artificial Diastole research, and wrote 8 whitepapers across related domains. First-time founder shipping fast with no external funding.
Niall Peters — Head Engineer (UK-based). Computer graphics specialist handling purity testing, code review, and API hardening. Validates that the engine stays pure: no side effects, no mutation, no IO leakage.
Alexander Nails — Technical Advisor. Machine learning compiler engineer. Reviews architecture decisions and provides engineering guidance on ML dynamics.
Track record: Founded December 2025. In 11 weeks with zero funding: built a working governance engine with all core tests passing, wrapped it in a REST API with Zod validation and SHA-256 audit trails, filed a provisional patent, ran empirical research showing 96% evaluator preference across two model families, and assembled a distributed team across two countries.
We built the product before asking for money.
Most likely causes of failure:
Adoption friction — Developers building AI applications may not prioritize governance integration until regulation forces them to. If demand doesn't materialize organically, the API exists but no one calls it.
Funding gap — Without runway, the founder splits time between income work and development. Progress slows from weeks to months. The architecture is sound but deployment stalls.
Team capacity — A 3-person distributed team with no FTEs has limited bandwidth. If a key contributor becomes unavailable, timelines stretch.
What failure does NOT look like: the core insight being wrong. The .03 Principle is mathematically and empirically grounded. The results of our research on structured pauses in output still stand. The governance engine passes its tests. Even if Bridge/99.97 Labs as a company fails, the architecture, the research, and the open-source engine remain available for others to build on.
Most likely outcome if the project fails: the work gets absorbed. The whitepapers enter the AI safety literature. The open-source engine sits on GitHub for someone else to deploy when the regulatory environment catches up. The worst case is not that the work disappears; it's that it arrives later than it should have, or that governance gets implemented elsewhere on shakier foundations.
$0. All work to date has been self-funded. No external funding received.
Active applications pending:
EA Funds Long-Term Future Fund — $28,000 (submitted February 2026)
Foresight AI for Safety & Science Nodes — $23,000 (submitted January 2026)
Anthropic Anthology Fund / Menlo Ventures — accelerator program (submitted February 2026)