This project develops Lisa Intel, a non-agentic governance and control layer for deployed AI systems in regulated and high-risk environments. The goal is to make advanced AI systems observable, auditable, and interruptible at runtime, independent of the underlying model.
The project focuses on building a minimal but robust MVP and validating it through early pilot use-cases with institutional stakeholders.
Goals
Build a working MVP of a post-deployment AI governance layer
Validate the architecture through simulated adversarial and failure scenarios
Prepare the system for 1–2 early institutional or regulated pilot deployments
How
Implement a non-agentic control layer that enforces policy-bounded actions (constraints, escalation, logging, override); an illustrative sketch follows this list
Integrate runtime observability and deterministic control paths
Stress-test the system using adversarial simulation inspired by recent AI attack literature
Translate the validated architecture into a deployable, pilot-ready system
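As a rough illustration of the control-layer concept, the sketch below shows how policy-bounded actions, escalation, logging, and operator override could fit together. It is a minimal sketch under assumed names (ControlLayer, Policy, and Decision are hypothetical), not the project's actual implementation.

```python
# Minimal illustrative sketch of a non-agentic, policy-bounded control layer.
# All names are hypothetical; the real MVP architecture may differ.
import logging
from dataclasses import dataclass, field
from enum import Enum

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("control_layer")

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"  # defer to a human reviewer

@dataclass
class Policy:
    # Actions the deployed model may take without review.
    allowed_actions: set[str]
    # Actions that always require human sign-off.
    escalate_actions: set[str] = field(default_factory=set)

class ControlLayer:
    """Sits between a deployed model and the outside world.

    Non-agentic: it never originates actions; it only filters,
    logs, and (when told to) halts actions the model proposes.
    """

    def __init__(self, policy: Policy):
        self.policy = policy
        self.halted = False  # operator override / kill switch

    def halt(self) -> None:
        """Operator override: deny everything until explicitly resumed."""
        self.halted = True
        log.warning("override engaged: all actions denied")

    def evaluate(self, action: str, payload: dict) -> Decision:
        """Deterministic, auditable decision for one proposed action."""
        if self.halted:
            decision = Decision.DENY
        elif action in self.policy.escalate_actions:
            decision = Decision.ESCALATE
        elif action in self.policy.allowed_actions:
            decision = Decision.ALLOW
        else:
            decision = Decision.DENY  # default-deny for anything unlisted
        # Every decision is logged, giving a runtime audit trail.
        log.info("action=%s decision=%s payload=%s",
                 action, decision.value, payload)
        return decision

# Example: a model proposes actions; the layer enforces the policy.
policy = Policy(allowed_actions={"read_record"},
                escalate_actions={"update_record"})
layer = ControlLayer(policy)
assert layer.evaluate("read_record", {"id": 7}) is Decision.ALLOW
assert layer.evaluate("update_record", {"id": 7}) is Decision.ESCALATE
assert layer.evaluate("delete_record", {"id": 7}) is Decision.DENY
layer.halt()
assert layer.evaluate("read_record", {"id": 7}) is Decision.DENY
```

The two load-bearing choices in the sketch are default-deny (any action not explicitly allowed is refused) and unconditional logging (the audit trail exists whatever the decision); the adversarial simulations described above would probe exactly these paths.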
The project treats AI safety as an infrastructure and execution problem, not a model-alignment problem.
Requested funding: $100,000
Minimum viable funding: $30,000
Proposed budget (for $100k):
$55k — Engineering & development (core MVP build, systems integration, testing)
$20k — Security simulations & evaluation (adversarial testing, robustness validation)
$15k — IP, legal, and standards groundwork (patent work, documentation for standardization)
$10k — Infrastructure & tooling (cloud, development, and testing environments)
If funded in the $30–50k range, scope is reduced to a narrower MVP plus security simulations.
Founder: Pedro Bentancour Garin
I have an interdisciplinary background spanning engineering, political science, philosophy, and doctoral-level research in the humanities, with a long-term focus on power, governance, and control systems.
Previously, I founded Treehoo, an early sustainability-focused internet platform with users in 170+ countries, and was a finalist at the Globe Forum in Stockholm (2009) alongside companies such as Tesla.
My academic work has been supported by 15+ competitive research grants, including funding from the Royal Swedish Academy of Sciences, and involved research stays at institutions such as Oxford University, the Getty Center (LA), the University of Melbourne, and the Vatican.
I am currently supported by an experienced strategy and fundraising advisor.
Likely causes of failure
Insufficient engineering capacity to fully realize the architecture
Slower-than-expected engagement from institutional pilot partners
Regulatory timelines shifting faster or slower than anticipated
Outcomes
Even in failure, the project produces:
A validated architectural framework for post-deployment AI governance
Security and robustness insights relevant to AI safety research
Documentation and IP that can inform future standards or related projects
Failure does not meaningfully increase risk; the system is non-agentic and deploys nothing by default.
Funding raised to date: $0.
The project has been developed founder-led, without external funding.