Lisa Intel is building a practical execution layer for AI safety and governance.
Current AI governance focuses on pre-deployment evaluation, documentation, and compliance. However, many of the most serious risks emerge during execution, when systems operate autonomously, interact with real environments, or are repurposed beyond their original intent.
This project develops and validates a runtime governance and safety layer that enables measurable control, observability, and intervention in advanced AI systems while they are operating. The goal is to make AI systems not just compliant on paper, but governable in practice.
Goals:
Build a functional prototype of a runtime AI governance layer (sketched below) that can:
▪︎ Monitor execution behavior
▪︎ Enforce constraints dynamically
▪︎ Provide measurable safety and accountability signals
Demonstrate that governance at execution time is technically feasible, auditable, and scalable.
Publish open technical documentation and evaluation results so others can verify, critique, and build upon the work.
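To make these capabilities concrete, here is a minimal, purely illustrative sketch in Python. It is not the project's actual design: every name in it (Action, Decision, RuntimeGovernor, the allow-list and approval rules) is hypothetical. It shows the basic shape of a mediation layer that authorizes each proposed action at execution time, blocks actions that fall outside policy, and records every decision for audit.

```python
import time
import uuid
from dataclasses import dataclass, field


@dataclass
class Action:
    """A single action an AI system proposes to execute (hypothetical schema)."""
    tool: str
    arguments: dict
    context: dict = field(default_factory=dict)


@dataclass
class Decision:
    """Outcome of a runtime governance check, retained as an audit record."""
    action: Action
    allowed: bool
    reason: str
    timestamp: float = field(default_factory=time.time)
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))


class RuntimeGovernor:
    """Illustrative mediation layer: monitors, enforces, and logs at execution time."""

    def __init__(self, allowed_tools: set[str]):
        self.allowed_tools = allowed_tools
        self.audit_log: list[Decision] = []

    def authorize(self, action: Action) -> Decision:
        # Monitor: every proposed action passes through this single checkpoint.
        if action.tool not in self.allowed_tools:
            decision = Decision(action, allowed=False,
                                reason=f"tool '{action.tool}' not on allow-list")
        elif action.context.get("environment") == "production" and not action.context.get("approved"):
            # Enforce: a context-aware constraint (production actions require prior approval).
            decision = Decision(action, allowed=False,
                                reason="production action lacks approval")
        else:
            decision = Decision(action, allowed=True, reason="within policy")
        # Signal: append an audit record so behavior remains observable after the fact.
        self.audit_log.append(decision)
        return decision


# Usage: wrap an agent's tool call instead of executing it directly.
governor = RuntimeGovernor(allowed_tools={"search", "read_file"})
decision = governor.authorize(Action(tool="delete_database", arguments={},
                                     context={"environment": "production"}))
print(decision.allowed, decision.reason)  # False, tool not on allow-list
```

In a real system the policy checks, approval flows, and audit storage would be far richer; the point of the sketch is only that monitoring, enforcement, and accountability signals can be concentrated at a single execution-time checkpoint.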
How we will achieve this:
▪︎ Design and implement a minimal but robust runtime control architecture focused on:
° Authorization at execution time
° Context-aware constraint enforcement
° Continuous observability and logging
▪︎ Test the system against realistic agentic and autonomous AI use cases where static safeguards are known to fail.
▪︎ Define concrete, measurable outcomes (e.g., reduction of unauthorized actions, response latency to violations, audit completeness); a sketch of such metrics appears below.
The emphasis is not on theoretical alignment, but on operational safety mechanisms that work under real conditions.
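As a hedged illustration of what "measurable" could mean here (the actual metric definitions will be fixed during the evaluation work), the short sketch below computes three example metrics from hypothetical run data: the relative reduction in unauthorized actions against an ungoverned baseline, the mean latency between a violation and the layer's intervention, and audit completeness as the fraction of executed actions with a matching audit record. All function names and numbers are illustrative.

```python
from statistics import mean


def unauthorized_reduction(baseline_count: int, governed_count: int) -> float:
    """Relative reduction in unauthorized actions vs. an ungoverned baseline run."""
    if baseline_count == 0:
        return 0.0
    return 1.0 - governed_count / baseline_count


def mean_response_latency(violation_times: list[float], intervention_times: list[float]) -> float:
    """Mean seconds between a violation occurring and the layer intervening."""
    return mean(stop - start for start, stop in zip(violation_times, intervention_times))


def audit_completeness(executed_ids: set[str], audited_ids: set[str]) -> float:
    """Fraction of executed actions that have a matching audit record."""
    if not executed_ids:
        return 1.0
    return len(executed_ids & audited_ids) / len(executed_ids)


# Made-up example numbers: 12 unauthorized actions without the layer vs. 2 with it;
# three violations handled 0.4 s, 0.6 s, and 0.9 s after they occurred;
# 98 of 100 executed actions have audit records.
print(unauthorized_reduction(12, 2))                                  # ~0.83
print(mean_response_latency([0.0, 5.0, 9.0], [0.4, 5.6, 9.9]))        # ~0.63
print(audit_completeness({str(i) for i in range(100)},
                         {str(i) for i in range(98)}))                # 0.98
```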
The requested $200,000 will be used over approximately 9–12 months for:
▪︎ Core technical development
Focused engineering work to build and test the runtime governance prototype.
▪︎ Safety and evaluation work
Designing measurable safety metrics and running controlled tests against real execution scenarios.
▪︎ Documentation and transparency
Publishing clear technical documentation, evaluation results, and failure analyses.
▪︎ Minimal operational costs
Infrastructure, security review, and limited external expertise where required.
No funds are allocated to marketing, tokenization, or speculative activities. The funding is strictly for building and validating the system.
Founder: Pedro Bentancour Garin
I have an interdisciplinary background spanning engineering, political science, philosophy, and doctoral-level research in the humanities, with a long-term focus on power, governance, and control systems.
Previously, I founded Treehoo, an early sustainability-focused internet platform with users in 170+ countries, and was a finalist at the Globe Forum in Stockholm (2009) alongside companies such as Tesla.
My academic work has been supported by 15+ competitive research grants, including funding from the Royal Swedish Academy of Sciences, and involved research stays at institutions such as Oxford University, the Getty Center (LA), the University of Melbourne, and the Vatican.
I am currently supported by an experienced strategy and fundraising advisor.
Most likely causes of failure
▪︎ Technical complexity proves higher than anticipated for a first-phase prototype.
▪︎ Integration challenges with real-world AI systems limit early demonstrations.
Outcomes if it fails
▪︎ Partial but still valuable outputs: architectural insights, failure analyses, and documented constraints of runtime governance.
▪︎ Open publication of results so others in the AI safety community can learn from what did and did not work.
Even in failure, the project would produce informative negative results, which are currently underrepresented in AI safety work.
To date, the project has been founder-led and developed without external funding.