

A Global System to Govern, Align and Keep AI Safe.

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks

Pedro Bentancour Garin

Proposal · Grant
Closes January 31st, 2026
$0 raised

$30,000 minimum funding

$100,000 funding goal


Project summary

This project develops Lisa Intel, a non-agentic governance and control layer for deployed AI systems in regulated and high-risk environments. The goal is to make advanced AI systems observable, auditable, and interruptible at runtime, independent of the underlying model.

The project focuses on building a minimal but robust MVP and validating it through early pilot use-cases with institutional stakeholders.
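As a rough, purely illustrative sketch of what such a layer could look like (the proposal does not publish an API; every name below, including GovernanceLayer and AuditRecord, is hypothetical), the idea is a wrapper that sits between callers and any underlying model, writes an append-only audit log for every interaction, and exposes a runtime interrupt that refuses further calls:

```python
# Illustrative sketch only: the proposal defines no public API, so all names
# here (GovernanceLayer, AuditRecord, audit.jsonl) are hypothetical.
import json
import threading
import time
from dataclasses import asdict, dataclass
from typing import Callable, Optional


@dataclass
class AuditRecord:
    """One log entry per model interaction (append-only JSONL)."""
    timestamp: float
    request: str
    response: Optional[str]
    allowed: bool
    reason: str


class GovernanceLayer:
    """Non-agentic wrapper: it never initiates actions itself; it only
    observes, logs, and can interrupt calls routed through it, regardless
    of which model sits underneath."""

    def __init__(self, model_call: Callable[[str], str], log_path: str):
        self._model_call = model_call          # underlying model, treated as a black box
        self._log_path = log_path
        self._interrupted = threading.Event()  # runtime kill switch

    def interrupt(self) -> None:
        """Operator-triggered stop: all subsequent calls are refused."""
        self._interrupted.set()

    def query(self, prompt: str) -> Optional[str]:
        if self._interrupted.is_set():
            self._audit(prompt, None, False, "interrupted by operator")
            return None
        response = self._model_call(prompt)
        self._audit(prompt, response, True, "ok")
        return response

    def _audit(self, request: str, response: Optional[str], allowed: bool, reason: str) -> None:
        record = AuditRecord(time.time(), request, response, allowed, reason)
        with open(self._log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    layer = GovernanceLayer(model_call=lambda p: f"echo: {p}", log_path="audit.jsonl")
    print(layer.query("hello"))   # answered and logged
    layer.interrupt()
    print(layer.query("world"))   # refused (None), with an audit entry explaining why
```

Because the wrapper only forwards, records, and refuses calls, it stays model-agnostic and non-agentic; a production system would of course need tamper-resistant logging and stronger interruption guarantees than a sketch like this can show.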

What are this project's goals? How will you achieve them?


Goals

Build a working MVP of a post-deployment AI governance layer

Validate the architecture through simulated adversarial and failure scenarios

Prepare the system for 1–2 early institutional or regulated pilot deployments

How

Implement a non-agentic control layer that enforces policy-bounded actions (constraints, escalation, logging, override); a minimal sketch of this control path follows after this list

Integrate runtime observability and deterministic control paths

Stress-test the system using adversarial simulation inspired by recent AI attack literature

Translate validated architecture into a deployable pilot-ready system
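As a minimal sketch of the deterministic control path referred to above (all policy names and thresholds here are invented for illustration; they are assumptions, not the project's actual rules), each proposed action is evaluated against a fixed, ordered set of constraints, and the most restrictive verdict, allow, escalate to a human, or block, always wins:

```python
# Hypothetical sketch of "constraints, escalation, block" as a deterministic
# control path; the constraint functions and limits below are invented.
from enum import Enum
from typing import Callable, Dict, List


class Decision(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"   # defer to a human reviewer
    BLOCK = "block"


def evaluate(action: Dict, constraints: List[Callable[[Dict], Decision]]) -> Decision:
    """Apply constraints in a fixed order; the most restrictive result wins.
    A pure function of its inputs, so the same action always yields the same decision."""
    severity = {Decision.ALLOW: 0, Decision.ESCALATE: 1, Decision.BLOCK: 2}
    worst = Decision.ALLOW
    for constraint in constraints:
        result = constraint(action)
        if severity[result] > severity[worst]:
            worst = result
    return worst


# Example constraints (illustrative only)
def spend_limit(action: Dict) -> Decision:
    amount = action.get("amount_usd", 0)
    if amount > 10_000:
        return Decision.BLOCK
    if amount > 1_000:
        return Decision.ESCALATE
    return Decision.ALLOW


def scope_check(action: Dict) -> Decision:
    allowed_scopes = {"read", "report"}
    return Decision.ALLOW if action.get("scope") in allowed_scopes else Decision.ESCALATE


if __name__ == "__main__":
    policy = [spend_limit, scope_check]
    print(evaluate({"scope": "read", "amount_usd": 50}, policy))      # Decision.ALLOW
    print(evaluate({"scope": "write", "amount_usd": 50}, policy))     # Decision.ESCALATE
    print(evaluate({"scope": "read", "amount_usd": 50_000}, policy))  # Decision.BLOCK
```

Because the evaluation is a pure function of the action and the policy set, the same input always yields the same verdict, which is what makes adversarial stress-testing (replaying crafted action records and checking the resulting decisions) meaningful.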

The project treats AI safety as an infrastructure and execution problem, not a model-alignment problem.

How will this funding be used?


Requested funding: $100,000

Minimum viable funding: $30,000

Proposed budget (for $100k):

$55k — Engineering & development

(core MVP build, systems integration, testing)

$20k — Security simulations & evaluation

(adversarial testing, robustness validation)

$15k — IP, legal, and standards groundwork

(patent work, documentation for standardization)

$10k — Infrastructure & tooling

(cloud, development, and testing environments)

If funded at $30–50k, the scope is reduced to a narrower MVP plus simulations.

Who is on your team? What's your track record on similar projects?


Founder: Pedro Bentancour Garin

I have an interdisciplinary background spanning engineering, political science, philosophy, and doctoral-level research in the humanities, with a long-term focus on power, governance, and control systems.

Previously, I founded Treehoo, an early sustainability-focused internet platform with users in 170+ countries, and was a finalist at the Globe Forum in Stockholm (2009) alongside companies such as Tesla.

My academic work has been supported by 15+ competitive research grants, including funding from the Royal Swedish Academy of Sciences, and involved research stays at institutions such as Oxford University, the Getty Center (LA), the University of Melbourne, and the Vatican.

I am currently supported by an experienced strategy and fundraising advisor.

What are the most likely causes and outcomes if this project fails?


Likely causes

Insufficient engineering capacity to fully realize the architecture

Slower-than-expected engagement from institutional pilot partners

Regulatory timelines shifting faster or slower than anticipated

Outcomes

Even in failure, the project produces:

A validated architectural framework for post-deployment AI governance

Security and robustness insights relevant to AI safety research

Documentation and IP that can inform future standards or related projects

Failure does not meaningfully increase risk; the system is non-agentic and non-deploying by default.

How much money have you raised in the last 12 months, and from where?


$0.

To date, the project has been founder-led and developed without external funding.

Similar projects (6)

Pedro Bentancour Garin

Global Governance & Safety Layer for Advanced AI Systems - We Stop Rogue AI

Building the first external oversight and containment framework + high-rigor attack/defense benchmarks to reduce catastrophic AI risk.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised

Francesca Gomez

Develop technical framework for human control mechanisms for agentic AI systems

Building a technical mechanism to assess risks, evaluate safeguards, and identify control gaps in agentic AI systems, enabling verifiable human oversight.

Technical AI safety · AI governance
$10K raised

Jared Johnson

Beyond Compute: Persistent Runtime AI Behavioral Conditioning w/o Weight Changes

Runtime safety protocols that modify reasoning, without weight changes. Operational across GPT, Claude, Gemini with zero security breaches in classified use

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised

Ella Wei

Testing a Deterministic Safety Layer for Agentic AI (QGI Prototype)

A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $20K

Murray Buchanan

Live Governance

Leveraging AI to enable coordination without demanding centralization

AI governance · Global catastrophic risks
$3K raised

Anthony Ware

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap

Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.

Technical AI safety · AI governance · Global catastrophic risks
$0 raised