Runtime Governance for Advanced AI Systems

Technical AI safety · AI governance · Global catastrophic risks

Pedro Bentancour Garin

Proposal · Grant
Closes March 8th, 2026
$0 raised
$500 minimum funding
$200,000 funding goal


Project summary

Lisa Intel is building a practical execution layer for AI safety and governance.

Current AI governance focuses on pre-deployment evaluation, documentation, and compliance. However, many of the most serious risks emerge during execution, when systems operate autonomously, interact with real environments, or are repurposed beyond their original intent.

This project develops and validates a runtime governance and safety layer that enables measurable control, observability, and intervention in advanced AI systems while they are operating. The goal is to make AI systems not just compliant on paper, but governable in practice.

What are this project's goals? How will you achieve them?

Goals:

  1. Build a functional prototype of a runtime AI governance layer (a minimal illustrative sketch follows this list) that can:

    ▪︎ Monitor execution behavior

    ▪︎ Enforce constraints dynamically

    ▪︎ Provide measurable safety and accountability signals

  2. Demonstrate that governance at execution time is technically feasible, auditable, and scalable.

  3. Publish open technical documentation and evaluation results so others can verify, critique, and build upon the work.
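
To make these capabilities concrete, here is a minimal, illustrative Python sketch of what such a layer could look like. All names (RuntimeGovernor, spending_limit, the Action fields) are hypothetical and not an existing Lisa Intel API: a governor intercepts each action an agent proposes, authorizes it against context-aware constraints at execution time, and appends every decision to an audit log.

```python
# Minimal illustrative sketch of a runtime governance layer.
# All names here are hypothetical (not an existing Lisa Intel API).

import time
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Action:
    tool: str      # e.g. "make_payment", "send_email"
    params: dict   # arguments the agent wants to pass to the tool
    context: dict  # runtime context: user, environment, task


@dataclass
class Decision:
    allowed: bool
    reason: str


# A constraint is any callable that inspects a proposed action in context
# and returns an allow/deny decision.
Constraint = Callable[[Action], Decision]


def spending_limit(max_usd: float) -> Constraint:
    """Example context-aware constraint: block payments above a limit."""
    def check(action: Action) -> Decision:
        if action.tool == "make_payment" and action.params.get("usd", 0) > max_usd:
            return Decision(False, f"payment exceeds limit of {max_usd} USD")
        return Decision(True, "within spending limit")
    return check


@dataclass
class RuntimeGovernor:
    constraints: list[Constraint]
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, action: Action) -> Decision:
        """Evaluate every constraint at execution time and log the outcome."""
        decision = Decision(True, "no constraint violated")
        for constraint in self.constraints:
            result = constraint(action)
            if not result.allowed:
                decision = result
                break
        self.audit_log.append({
            "time": time.time(),
            "tool": action.tool,
            "params": action.params,
            "allowed": decision.allowed,
            "reason": decision.reason,
        })
        return decision


# Usage: the agent proposes an action; the governor decides before it runs.
governor = RuntimeGovernor(constraints=[spending_limit(max_usd=100.0)])
proposed = Action("make_payment", {"usd": 2500}, {"task": "refund customer"})
decision = governor.authorize(proposed)
print(decision.allowed, decision.reason)  # False  payment exceeds limit of 100.0 USD
```

The point the sketch illustrates is that enforcement happens per action at execution time, so a constraint can use live context (who is acting, in what environment, on what task) rather than only pre-deployment assumptions.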

How we will achieve this:

▪︎ Design and implement a minimal but robust runtime control architecture focused on:

° Authorization at execution time

° Context-aware constraint enforcement

° Continuous observability and logging

▪︎ Test the system against realistic agentic and autonomous AI use cases where static safeguards are known to fail.

▪︎ Define concrete, measurable outcomes (e.g. reduction of unauthorized actions, response latency to violations, audit completeness); a sketch of how these could be computed follows below.

The emphasis is not on theoretical alignment, but on operational safety mechanisms that work under real conditions.
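
As one illustration of the measurable outcomes listed above, the following Python sketch shows how they could be computed from a runtime audit trail. The log schema is hypothetical (the entries produced by the governor sketch earlier), not a defined Lisa Intel format.

```python
# Illustrative metric calculations over a runtime audit log.
# The log schema is hypothetical (entries like those produced by the sketch above).

def unauthorized_action_rate(audit_log: list[dict]) -> float:
    """Fraction of attempted actions that violated a constraint."""
    if not audit_log:
        return 0.0
    blocked = sum(1 for entry in audit_log if not entry["allowed"])
    return blocked / len(audit_log)


def mean_response_latency(violations: list[dict]) -> float:
    """Average seconds between detecting a violation and intervening.
    Assumes each entry carries 'detected_at' and 'intervened_at' timestamps."""
    if not violations:
        return 0.0
    delays = [v["intervened_at"] - v["detected_at"] for v in violations]
    return sum(delays) / len(delays)


def audit_completeness(actions_executed: int, actions_logged: int) -> float:
    """Share of executed actions that appear in the audit trail."""
    if actions_executed == 0:
        return 1.0
    return min(actions_logged / actions_executed, 1.0)
```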

How will this funding be used?

The requested $200,000 will be used over approximately 9–12 months for:

▪︎ Core technical development

Focused engineering work to build and test the runtime governance prototype.

▪︎ Safety and evaluation work

Designing measurable safety metrics and running controlled tests against real execution scenarios.

▪︎ Documentation and transparency

Publishing clear technical documentation, evaluation results, and failure analyses.

▪︎ Minimal operational costs

Infrastructure, security review, and limited external expertise where required.

No funds are allocated to marketing, tokenization, or speculative activities. The funding is strictly for building and validating the system.

Who is on your team? What's your track record on similar projects?


Founder: Pedro Bentancour Garin

I have an interdisciplinary background spanning engineering, political science, philosophy, and doctoral-level research in the humanities, with a long-term focus on power, governance, and control systems.

Previously, I founded Treehoo, an early sustainability-focused internet platform with users in 170+ countries, and was a finalist at the Globe Forum in Stockholm (2009) alongside companies such as Tesla.

My academic work has been supported by 15+ competitive research grants, including funding from the Royal Swedish Academy of Sciences, and involved research stays at institutions such as Oxford University, the Getty Center (LA), the University of Melbourne, and the Vatican.

I am currently supported by an experienced strategy and fundraising advisor.

What are the most likely causes and outcomes if this project fails?

Most likely causes

▪︎ Technical complexity proves higher than anticipated for a first-phase prototype.

▪︎ Integration challenges with real-world AI systems limit early demonstrations.

Outcomes if it fails

▪︎ Partial but still valuable outputs: architectural insights, failure analyses, and documented constraints of runtime governance.

▪︎ Open publication of results so others in the AI safety community can learn from what did and did not work.

Even in failure, the project would produce informative negative results, which are currently underrepresented in AI safety work.

How much money have you raised in the last 12 months, and from where?


$0.

The project has been developed founder-led to date, without external funding.
