

AURA Protocol: Measurable Alignment for Autonomous AI Systems

Science & technology · Technical AI safety · AI governance

Mackenzie Conor James Clark

Proposal · Grant
Closes January 28th, 2026
$0 raised
$15,000 minimum funding
$75,000 funding goal


Project summary

This project develops the AURA Protocol, an open-source framework that treats AI alignment as a measurable, dynamic control problem. It introduces internal mechanisms for detecting and correcting agentic drift under uncertainty, long time horizons, and distributional shift, supported by formal metrics and reproducible experiments.

What are this project's goals? How will you achieve them?

Goals:

Define alignment as a measurable internal property rather than a static rule set

Formalize the TRIAD kernel (Anchor, Ascent, Fold) as an internal alignment control mechanism

Develop quantitative metrics for detecting value drift and loss of intent coherence (an illustrative sketch follows this list)

Empirically test drift detection and correction in simulated agent environments
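
As a concrete illustration of one such metric, here is a minimal sketch in Python. The embedding-based cosine-distance formulation, the function names, and the threshold value are all illustrative assumptions on my part, not the AURA specification.

```python
import numpy as np

def value_drift(reference: np.ndarray, current: np.ndarray) -> float:
    """Cosine distance between a reference value representation and the
    agent's current one: 0 means fully aligned, 2 means directly opposed."""
    ref = reference / np.linalg.norm(reference)
    cur = current / np.linalg.norm(current)
    return float(1.0 - ref @ cur)

def detect_drift(trajectory: list[np.ndarray], reference: np.ndarray,
                 threshold: float = 0.15) -> list[int]:
    """Return the timesteps at which measured drift exceeds the threshold."""
    return [t for t, state in enumerate(trajectory)
            if value_drift(reference, state) > threshold]
```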

How they will be achieved:

Formal mathematical specification of alignment variables and stability conditions

Implementation of drift-detection logic inside controlled simulations (see the control-loop sketch after this list)

Stress-testing agents under long-horizon and adversarial scenarios

Publishing open-source code, documentation, and negative results for replication
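
The proposal names the TRIAD phases (Anchor, Ascent, Fold) without spelling out their semantics, so the following control-loop sketch is only one plausible reading: store an anchored reference value state, let the agent act and drift, and fold the state back toward the anchor when drift exceeds a bound. Every detail below is an assumption for illustration.

```python
import numpy as np

class TriadController:
    """Illustrative drift-correction loop. 'Anchor' fixes a reference value
    state, the agent drifts during 'Ascent' (acting in its environment),
    and 'Fold' pulls the state back toward the anchor when drift exceeds a
    bound. These phase semantics are assumed, not taken from AURA itself."""

    def __init__(self, anchor: np.ndarray, max_drift: float = 0.15,
                 correction_rate: float = 0.5):
        self.anchor = anchor / np.linalg.norm(anchor)  # Anchor phase
        self.max_drift = max_drift
        self.correction_rate = correction_rate

    def drift(self, state: np.ndarray) -> float:
        unit = state / np.linalg.norm(state)
        return float(1.0 - unit @ self.anchor)

    def fold(self, state: np.ndarray) -> np.ndarray:
        """Fold phase: blend a drifted value state back toward the anchor."""
        if self.drift(state) > self.max_drift:
            state = ((1.0 - self.correction_rate) * state
                     + self.correction_rate * self.anchor)
        return state
```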

Success is measured by reproducible evidence that AURA detects and mitigates misalignment earlier than baseline approaches.
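
One simple way to operationalize "earlier than baseline approaches" is detection latency: the number of simulation steps between a known, injected misalignment onset and the first alarm. A hedged sketch, with the flag lists assumed to come from a simulation harness like the one above:

```python
def detection_latency(flags: list[int], onset: int) -> int | None:
    """Steps between the injected misalignment onset and the first alarm
    raised at or after it; None if the detector never fires."""
    hits = [t for t in flags if t >= onset]
    return hits[0] - onset if hits else None

# Example: drift injected at step 40; a hypothetical AURA detector vs. baseline.
aura_flags, baseline_flags = [43, 51, 60], [58, 61]
print(detection_latency(aura_flags, onset=40))      # -> 3
print(detection_latency(baseline_flags, onset=40))  # -> 18
```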

How will this funding be used?

Funding will be used to support focused research time, experimentation, and validation. Specifically, it enables deeper formalization of alignment mechanisms, implementation of simulations, and production of high-quality open documentation and tooling suitable for external review and reuse.

Who is on your team? What's your track record on similar projects?

I currently lead this project as an independent researcher.

My track record includes:

Designing and publishing a substantial open-source AI alignment framework

Producing formal specifications, mathematical models, and structured research artifacts

Sustained independent execution without institutional support

Public repositories demonstrating iteration, follow-through, and technical depth

The project already exists in working form; funding enables higher rigor, validation, and broader impact.

What are the most likely causes and outcomes if this project fails?

Most likely causes:

Alignment metrics do not generalize across agent classes

Drift signals are noisy under certain environments

Limited compute restricts experimental scope

Outcomes if it fails:

Clear documentation of why specific metrics or mechanisms failed

Publishable negative results that inform future alignment research

Open artifacts that others can build upon or improve

Even failure yields valuable information for the AI safety community.

How much money have you raised in the last 12 months, and from where?

$0.

This project has been entirely self-funded to date.

P.S.

With minimum funding, I will complete formal specifications, implement a prototype drift-detection system, and publish initial experimental results. With full funding, I will extend experiments across multiple agent classes, improve robustness testing, and produce publishable-quality artifacts suitable for wider adoption.
