
Funding requirements

  • Sign grant agreement

  • Reach min funding

  • Get Manifund approval
FrameworkZero: International Hybrid-Technical AI Governance DAO

Technical AI safety · AI governance · Global catastrophic risks

David Giagnocavo

Proposal · Grant
Closes November 12th, 2025
$0 raised
$18,000 minimum funding
$23,000 funding goal


37 days left to contribute


Project summary

A sociotechnical AI governance framework providing the tools and infrastructure to establish standardized safety-first global AI development.

It seeks to be an actionable international AI governance solution, built on open-source decentralized blockchain technology, collaborative verifiable safety, and automated technical enforcement with human-in-the-loop orchestration. It is ultimately multilaterally developed and operated: enforced by nation states, adopted by the AI industry, and integrated into AI-compute datacenters. It is not a for-profit project, nor is it a blockchain project with tradable tokens.
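
To make this concrete, the following minimal Python sketch shows one way an automated red-line check could be gated behind a human sign-off before any enforcement action is taken. All names, fields, and the notion of a single compute threshold are illustrative assumptions, not part of the FrameworkZero design.

from dataclasses import dataclass

# Hypothetical report a lab or datacenter might submit for verification.
@dataclass(frozen=True)
class TrainingRunReport:
    run_id: str
    compute_flops: float          # reported training compute
    safety_evals_passed: bool     # outcome of agreed-upon safety evaluations

# Hypothetical red-line policy agreed through the governance process.
@dataclass(frozen=True)
class RedLinePolicy:
    max_compute_flops: float
    require_safety_evals: bool

def automated_check(report: TrainingRunReport, policy: RedLinePolicy) -> bool:
    """Deterministic check that any independent verifier could reproduce."""
    within_compute = report.compute_flops <= policy.max_compute_flops
    evals_ok = report.safety_evals_passed or not policy.require_safety_evals
    return within_compute and evals_ok

def enforce(report: TrainingRunReport, policy: RedLinePolicy, human_approval: bool) -> str:
    """Automated enforcement, released only after human orchestration signs off."""
    if not automated_check(report, policy):
        return "halt"            # e.g. datacenter-level compute gating
    if not human_approval:
        return "pending-review"  # an automated pass still awaits a human decision
    return "approved"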

Further details can be found at: FrameworkZero.org

What are this project's goals? How will you achieve them?

This project's mission is to bring about standardized, safety-first global AI development that is mutually advantageous:

  • Reduce dangerous national & market AI race dynamics.

  • Empower collaborative AI safety research and progress.

  • Establish agreed-upon red lines for frontier AI development.

  • Provide secure verifiable infrastructure & standardization.

Its mission is achieved by enabling global coordination between nations, policymakers, safety experts, and industry through tiered voting, collaborative AI safety research, and adversarially hardened open-source development.
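
As a purely illustrative sketch of tiered voting, the Python fragment below weights each vote by stakeholder tier and passes a motion once the weighted share in favor exceeds a threshold. The tiers, weights, and threshold are hypothetical; the proposal does not specify them.

# Hypothetical tier weights; real weights would be set multilaterally.
TIER_WEIGHTS = {"nation": 3.0, "safety_expert": 2.0, "industry": 1.0}

def tally(votes: list[tuple[str, bool]], threshold: float = 0.5) -> bool:
    """votes is a list of (tier, in_favor) pairs; the motion passes when the
    weighted share of votes in favor exceeds the threshold."""
    total = in_favor = 0.0
    for tier, favor in votes:
        weight = TIER_WEIGHTS[tier]
        total += weight
        if favor:
            in_favor += weight
    return total > 0 and in_favor / total > threshold

# Example: two nations in favor outweigh one industry vote against.
print(tally([("nation", True), ("nation", True), ("industry", False)]))  # True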

Within the scope of this proposal, the goal is to draw interest from the AI safety community and to demonstrate how blockchain technologies can uniquely solve key issues in AI governance and support global AI safety.

How will this funding be used?

This is an ambitious project that will require global effort and substantial funding far outside the scope of this proposal. Therefore, this proposal's funding will be used to further refine the concept, cultivate international will and advocate for its funding and development.

Funding allocation:

  • Professional introduction video - $2-5k

  • Promotional expenses - $5k

  • Presentation at an AI safety conference (Europe) - $3-4k

  • Incentivized peer review by AI safety researchers and blockchain industry professionals - $500 per review, 4 reviewers

  • Minimal base funding for principal team member - $1k/month for 6 months

Minimum budget: $18k
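
Taking the low end of each range: $2k (video) + $5k (promotion) + $3k (conference) + $2k (four reviews at $500 each) + $6k (six months at $1k/month) = $18k.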

Given subsequent funding, the project will expand its team to lay the initial groundwork for the framework's architecture.

Who is on your team? What's your track record on similar projects?

Currently the team is composed of a software developer and a proposal advisor.

David Giagnocavo (linkedin.com/in/david-giagnocavo) is a generalist software developer with over 15 years of experience. Apart from co-founding several tech startups, he was the principal contributor to the SciThereum project, a scientific communication DAO. SciThereum did not secure funding and was discontinued in early 2023 (figma.com/board/spae5pctjAE6eSfhQ1ngRt/SciThereum).

Dr. Tamiana Tran (linkedin.com/in/tamiana-tam-tran-ph-d-41727452) is an experienced PhD researcher in the field of synthetic molecular biology. She has a recognized and published track record in scientific research, has been involved in many government-funded research projects, and serves as an advisor to a WHO antimicrobial resistance initiative.

What are the most likely causes and outcomes if this project fails?

  • Shortcomings in the technical AI governance research this project depends on, in areas such as specialized hardware, Trusted Execution Environments, and Trusted Capable Model Environments.

  • Challenges posed by the recently stated US presidential stance opposing international AI governance, as well as lobbying from AI companies. The project therefore shares the struggles of many international AI governance efforts and may fail to generate the international will required for its realization.

How much money have you raised in the last 12 months, and from where?

This project has not previously sought funding and has been personally funded by the team members.
