
Formal Certification Technologies for AI Safety

Science & technology · Technical AI safety · AI governance

Mirco Giacobbe

Active grant
$128,000 raised of $128,000 funding goal
Fully funded and not currently accepting donations.

Project summary

The project aims to advance both the theory and implementation of formal proof certificates for complex specifications that define the safety of probabilistic cyber-physical systems.

On the theoretical side, it will establish a mathematical framework providing formal guarantees for safety and reliability of cyber-physical and software systems, with broad applications to domains such as supply chains, power grids, autonomous robotics, avionics, and beyond.

On the implementation side, the team will develop software tools that automate both the generation of proof obligations for the safety verification of reactive and cyber-physical AI systems and the validation of proof certificates that satisfy these obligations.

This work will lay the foundations for a new generation of trustworthy AI technologies, in which systems are capable of producing formal proofs of probabilistic compliance with safety specifications. Our new theory will enable the design of learnable and efficiently checkable proofs of correctness for stochastic, reactive, cyber-physical systems. Our software will integrate automated reasoning with machine learning infrastructures, bridging the gap between theoretical verification and practical tools for AI safety.
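
To make the proof-certificate idea concrete, below is a minimal sketch of one such certificate, a "c-martingale" barrier function for a toy discrete-time stochastic system. The dynamics, noise model, and constants are illustrative assumptions, not our framework. If a nonnegative function V satisfies (1) V(x) ≥ 1 on the unsafe set and (2) E[V(f(x) + w)] ≤ V(x) + C everywhere, then the probability of reaching the unsafe set within T steps from x0 is at most V(x0) + C·T, by Ville's inequality applied to the supermartingale M_k = V(x_k) + (T − k)·C. A production verifier would discharge conditions (1) and (2) symbolically, for instance with an SMT solver; this sketch only spot-checks them on random samples.

```python
import numpy as np

# Illustrative toy example (assumed, not this project's actual tooling):
# a c-martingale barrier certificate for x_{k+1} = f(x_k) + w_k,
# with w_k ~ N(0, SIGMA^2 I).

rng = np.random.default_rng(0)
SIGMA = 0.05        # noise scale (assumed)
C = SIGMA**2 / 2    # martingale slack valid for this particular f and V

def f(x):
    # Hypothetical dynamics: a contraction toward the origin.
    return 0.9 * x

def unsafe(x):
    # Hypothetical unsafe set: ||x|| >= 2.
    return np.linalg.norm(x) >= 2.0

def V(x):
    # Candidate certificate: scaled squared norm, equal to 1 on the unsafe boundary.
    return float(np.dot(x, x)) / 4.0

def spot_check(n_states=1000, n_noise=256, tol=1e-3):
    """Monte-Carlo sanity check of the two certificate conditions."""
    for _ in range(n_states):
        x = rng.uniform(-2.5, 2.5, size=2)
        if unsafe(x) and V(x) < 1.0:
            return False                       # condition (1) fails
        w = rng.normal(0.0, SIGMA, size=(n_noise, 2))
        ev = np.mean([V(f(x) + wi) for wi in w])
        if ev > V(x) + C + tol:
            return False                       # condition (2) fails, up to sampling error
    return True

if __name__ == "__main__":
    x0, T = np.array([0.2, 0.1]), 100
    print("conditions hold on samples:", spot_check())
    # Certified bound: P(reach unsafe within T steps) <= V(x0) + C*T.
    print(f"certified bound: {V(x0) + C * T:.4f}")
```

In our setting, a certificate like V would be learned, for example as a neural network, and then checked formally, which is what makes such proofs of correctness both learnable and efficiently checkable.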

What are this project's goals? How will you achieve them?


Our goal is to overcome the curse of dimensionality in the formal verification of complex cyber-physical AI systems. The outcome will be a theoretical framework and a software tool for the safety assurance of AI systems with mathematical certainty.

How will this funding be used?

The funds will provide a cash float to support research scientists and engineers at Zeroth Research, a newly founded non-profit organisation based in Birmingham, UK, plus travel and compute, enabling the team to achieve their objectives and advance the project's goals. As our grant payments on two existing projects are currently in arrears, this funding will allow us to pay salaries, bills, and subcontractors on time while continuing our work beyond the span of the project.

Who is on your team? What's your track record on similar projects?


The team consists of global experts in the theory and practice of formal verification of cyber-physical and software systems, cyber-security, and theorem proving: Luca Arnaboldi, Pascal Berrang, Marco Casadio, Marek Chalupa, Mirco Giacobbe, Edoardo Manino, Greg Neustroev, Simon Schmidt, and Ayberk Tosun at Zeroth Research. The broader team includes the external collaborators Alberto Griggio, Giulia Sindoni, Fajar Haifani, and Stefano Tonetta at FBK, Trento, Italy, and Diptarko Roy at the University of Birmingham, UK. The team has an extensive track record in the development of formal verification tools, including the award-winning software verifier Symbiotic, the industry-grade SMT solver MathSAT, and many more.

What are the most likely causes and outcomes if this project fails?

The curse of dimensionality of complex systems is our biggest challenge. Nevertheless, several preliminary results published at flagship venues in AI and formal verification, such as AAAI, NeurIPS, and CAV, have already demonstrated the clear potential of proof-certificate-based verification across applications, from automatic control to hardware design. Our project will test these new technical hypotheses at scale.

How much money have you raised in the last 12 months, and from where?


GBP 1,168,007 from the Advanced Research and Invention Agency (ARIA).

Similar projects

Jade Master

SDCPNs for AI Safety

Developing correct-by-construction world models for verification of frontier AI

Science & technology · Technical AI safety · Global catastrophic risks
$39K raised

Francesca Gomez

Develop technical framework for human control mechanisms for agentic AI systems

Building a technical mechanism to assess risks, evaluate safeguards, and identify control gaps in agentic AI systems, enabling verifiable human oversight.

Technical AI safety · AI governance
$10K raised

SaferAI

General support for SaferAI

Support for SaferAI’s technical and governance research and education programs to enable responsible and safe AI.

AI governance
$100K raised

Jared Johnson

Beyond Compute: Persistent Runtime AI Behavioral Conditioning w/o Weight Changes

Runtime safety protocols that modify reasoning without weight changes. Operational across GPT, Claude, and Gemini with zero security breaches in classified use

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised of $125K goal

Orpheus Lummis

Guaranteed Safe AI Seminars 2026

Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.

Technical AI safety · AI governance · Global catastrophic risks
$30K raised

Agustín Martinez Suñé

SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents

Seeking funding to develop and evaluate a new benchmark for systematically assessing the safety of LLM-based agents

Technical AI safety · AI governance
$1.98K raised

Arifa Khan

Preventing AI Catastrophe Through Economic Mechanisms

The Reputation Circulation Standard - Implementation Sprint

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised

Rufo Guerreschi

Coalition for a Baruch Plan for AI

Catalyzing a uniquely bold, timely and effective treaty-making process for AI

Technical AI safety · AI governance · Global catastrophic risks
$0 raised