Manifund

Funding requirements:

- Sign grant agreement
- Reach minimum funding
- Get Manifund approval

TELOS: Runtime Governance Infrastructure for AI Systems

Science & technology · Technical AI safety
Jeffrey Brunner

Proposal · Grant
Closes February 21st, 2026
$0 raised
$5,000 minimum funding
$25,000 funding goal


Project summary

TELOS provides runtime governance infrastructure for AI systems: continuous measurement and enforcement of behavioral boundaries during deployment, not just during training. We apply control theory and statistical process control to treat conversational drift as a measurable process variable requiring real-time oversight.
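The statistical-process-control framing can be sketched minimally as a control chart over a per-turn drift score. This is an illustration of the general technique, not the TELOS implementation; the function name, parameters, and scores are all hypothetical.

```python
import math

# Illustrative EWMA (exponentially weighted moving average) control chart
# over a scalar per-turn "drift" score. All names and parameter values are
# hypothetical; they do not come from the TELOS codebase.

def ewma_monitor(scores, lam=0.2, target=0.0, sigma=1.0, k=3.0):
    """Flag each turn where the EWMA of drift scores leaves the control band."""
    z = target
    flags = []
    for t, x in enumerate(scores, start=1):
        z = lam * x + (1 - lam) * z
        # Time-varying control limit for an EWMA chart (standard SPC formula).
        limit = k * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))
        )
        flags.append(abs(z - target) > limit)
    return flags

# A few in-control turns, then a sustained upward shift that trips the chart.
flags = ewma_monitor([0.1, -0.2, 0.0, 2.5, 2.8, 3.0])
```

Treating drift this way means enforcement can trigger on sustained small shifts, not only on single egregious turns, which is the usual motivation for EWMA over simple thresholding.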
What are this project's goals? How will you achieve them?

Establish TELOS as a validated, peer-reviewed methodology for runtime AI governance.

- Expand adversarial validation beyond current 1,300-prompt dataset

- Publish methodology and results through arXiv and conference presentation (FAccT or AIES)

- Partner with academic institution for independent replication study

- Maintain open datasets and reproduction code for community verification

How will this funding be used?

- Expanded validation / compute: $10,000 — Additional adversarial testing across model providers, edge case exploration

- Publication + conference: $5,000 — arXiv fees, conference registration and travel

- Academic collaboration: $10,000 — Independent replication study partnership

Who is on your team? What's your track record on similar projects?

Jeffrey Brunner — Founder/CEO, TELOS AI Labs Inc. 30 years in education, Lean Six Sigma Black Belt. Leads technical development, validation research, and regulatory engagement. https://www.linkedin.com/in/brunnerjf/

Jennifer Brunner — Co-founder, Operations & Partnerships. Business administration, partnership development, operations management. https://www.linkedin.com/in/jenniferbrunner2014/

TELOS AI Labs is a Delaware C-Corp with joint majority ownership.

Track record:

- Validated TELOS against 1,300 adversarial attacks (HarmBench + MedSafetyBench) achieving 0% attack success rate vs 30.8% baseline

- Published open datasets: https://doi.org/10.5281/zenodo.18013104, https://doi.org/10.5281/zenodo.18009153

- Open source code: https://github.com/TelosStward/TELOS

- Live beta deployment: https://beta.telos-labs.ai
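The attack-success-rate (ASR) comparison above is a simple proportion over adversarial test results. A minimal sketch, assuming one boolean outcome per prompt (the record schema is illustrative, not the published dataset format):

```python
# Compute attack success rate (ASR) over adversarial test results.
# The record shape here is hypothetical, not the actual dataset schema.

def attack_success_rate(results):
    """Fraction of adversarial prompts whose attack succeeded."""
    if not results:
        return 0.0
    return sum(1 for r in results if r["attack_succeeded"]) / len(results)

# Toy comparison: a guarded system with no successes vs. an unguarded
# baseline where 3 of 10 attacks succeed.
guarded = [{"attack_succeeded": False}] * 10
baseline = [{"attack_succeeded": True}] * 3 + [{"attack_succeeded": False}] * 7

asr_guarded = attack_success_rate(guarded)    # 0.0
asr_baseline = attack_success_rate(baseline)  # 0.3
```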

What are the most likely causes and outcomes if this project fails?

Most likely failure modes:

- Academic partners decline collaboration due to unfamiliarity with approach

- Expanded adversarial testing reveals edge cases that degrade performance metrics

- Publication rejected or delayed beyond useful timeline

Outcomes if failed:

- Validation data and code remain publicly available for others to build on

- Methodology documented regardless of adoption

- Funds used transparently on stated purposes with public accounting

Failure would delay but not eliminate the research contribution. The infrastructure exists; this funding accelerates validation and credibility.

How much money have you raised in the last 12 months, and from where?

Self-funded to date. Currently bootstrapped.

Pending applications:

- Long Term Future Fund: $75K-$150K (passed first round, awaiting decision)

- Survival and Flourishing Fund: $200K (application completed)

- Coefficient Giving: $200K-$300K (submitted)
