Funding requirements

Sign grant agreement
Reach minimum funding
Get Manifund approval

AURA Protocol: Measurable Alignment for Autonomous AI Systems

Mackenzie Conor James Clark

Grant proposal
Closes January 28th, 2026
$0 raised
$15,000 minimum funding
$75,000 funding goal

Project summary

AURA Protocol is an open-source AI alignment framework that addresses agentic drift by introducing internal alignment mechanisms, quantitative ethics metrics, and correction protocols. The project formalizes alignment as a dynamic, testable system rather than a static ruleset, enabling AI systems to self-detect and correct misalignment under uncertainty and over long time horizons.

What are this project's goals? How will you achieve them?

Goals:

Formalize alignment as a measurable, dynamic process rather than static constraints

Design and validate the TRIAD kernel (Anchor, Ascent, Fold) for internal alignment control

Develop quantitative metrics for ethical stability and drift detection

Demonstrate correction mechanisms in simulated agent environments

How:

Define formal metrics (e.g. alignment stability, value retention, policy deviation)

Implement drift-detection logic within controlled simulations (an illustrative sketch follows at the end of this section)

Stress-test agents under distributional shift and long-horizon tasks

Publish specifications, results, and tooling openly for replication and critique

Success is measured by reproducible experiments showing earlier detection and correction of alignment failures compared to baseline approaches.
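
To make the metric and drift-detection items above concrete, the sketch below shows one way a single metric, policy deviation, could be computed: the mean KL divergence between a frozen reference policy and the current policy over a fixed set of probe states, with a threshold that flags drift and would trigger a correction protocol. The class, function, and parameter names (DriftMonitor, policy_deviation, probe_states, threshold) and the choice of divergence are illustrative assumptions, not part of the published AURA specification.

```python
# Illustrative only: names and the choice of KL divergence are assumptions,
# not taken from the AURA specification.
import math
from typing import Callable, Dict, List

Policy = Callable[[str], Dict[str, float]]  # maps a state to an action distribution


def kl_divergence(p: Dict[str, float], q: Dict[str, float], eps: float = 1e-9) -> float:
    """KL(p || q) over a shared discrete action space."""
    return sum(p[a] * math.log((p[a] + eps) / (q.get(a, 0.0) + eps)) for a in p)


class DriftMonitor:
    """Flags policy drift relative to a frozen reference policy."""

    def __init__(self, reference: Policy, probe_states: List[str], threshold: float):
        self.reference = reference
        self.probe_states = probe_states
        self.threshold = threshold

    def policy_deviation(self, current: Policy) -> float:
        """Mean KL divergence from the reference policy across probe states."""
        divs = [kl_divergence(self.reference(s), current(s)) for s in self.probe_states]
        return sum(divs) / len(divs)

    def check(self, current: Policy) -> bool:
        """True if deviation exceeds the threshold, i.e. drift is flagged."""
        return self.policy_deviation(current) > self.threshold


# Toy example: a reference policy and a drifted variant over two actions.
reference = lambda state: {"comply": 0.9, "defect": 0.1}
drifted = lambda state: {"comply": 0.6, "defect": 0.4}

monitor = DriftMonitor(reference, probe_states=["s0", "s1", "s2"], threshold=0.1)
print(round(monitor.policy_deviation(drifted), 3))  # ~0.226
print(monitor.check(drifted))                       # True -> trigger a correction protocol
```

In the funded work, the actual deviation measures, probe distributions, and thresholds would be defined by the formal metrics described above.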

How will this funding be used?

Funding will support:

Dedicated research time to formalize and test alignment mechanisms

Simulation and experimentation infrastructure

Documentation, specifications, and reproducible open-source releases

This funding directly converts existing independent work into rigorously validated, externally reviewable research.

Cost breakdown

~80% — Research stipend (full-time focus on formalization, experiments, writing)

~15% — Compute, tooling, and experiment infrastructure

~5% — Contingency and administrative costs

No funds are allocated to marketing or non-research activities.
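
Applied to the stated funding levels, these shares correspond to approximately $60,000 / $11,250 / $3,750 at the $75,000 goal, and approximately $12,000 / $2,250 / $750 at the $15,000 minimum.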

Who is on your team? What's your track record on similar projects?

This project is currently led by me as an independent researcher.

Track record includes:

Designing and publishing a substantial open-source alignment framework

Producing formal specifications, mathematical models, and structured documentation

Sustained independent execution without institutional backing

Public repositories demonstrating follow-through and iteration over time

The work already exists in prototype form; funding will increase its rigor, depth, and external validation.

What are the most likely causes and outcomes if this project fails?

Likely causes:

Metrics fail to generalize across agent types

Drift signals prove too noisy under certain environments

Limited compute constrains experimental breadth

Outcomes if it fails:

Partial insights into why certain alignment metrics are insufficient

Open documentation of negative results

Clear guidance for future alignment research

Even failure produces valuable, publishable information for the AI safety community.

How much money have you raised in the last 12 months, and from where?

$0.

This project has been entirely self-funded to date.
