Funding requirements
• Sign grant agreement
• Reach min funding
• Get Manifund approval

Immediate Action System — Open-Hardware ≤10 ns ASI Kill Switch Prototype ($150k)

Science & technology · Technical AI safety · AI governance · Global catastrophic risks

Sean Sheppard

Proposal · Grant
Closes December 19th, 2025
$0 raised
$150,000 minimum funding
$150,000 funding goal

28 days left to contribute

First open-hardware guard circuit that removes power from an ASI cluster in ≤10 ns using physics, not software — the only containment primitive that survives superintelligence.

Project page / repo
https://github.com/CovenantArchitects/The-Partnership-Covenant

Key deliverable
Fully functional discrete prototype board + third-party verification by Q1 2026

Why this matters
Every existing stop button, encrypted gradient, or human veto fails the moment the system is smarter than its safeguards. IAS moves the final actuator outside the threat model entirely.
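
To make the ≤10 ns claim concrete, here is a back-of-envelope latency budget in Python. Every stage delay below is a generic, assumed figure for illustration; none of these numbers come from the IAS preprint.

# Back-of-envelope latency budget for a <=10 ns hardware trip path.
# All stage delays are assumed, illustrative values, not IAS spec figures.

C = 3.0e8         # speed of light in vacuum, m/s
BUDGET_NS = 10.0  # total end-to-end budget, ns

# Distance a signal can cover in the full budget: ~3 m in vacuum,
# ~1.8 m on a PCB trace at roughly 0.6c.
print(f"vacuum reach: {C * BUDGET_NS * 1e-9:.1f} m")
print(f"PCB reach:    {0.6 * C * BUDGET_NS * 1e-9:.1f} m")

# Assumed stage delays (ns) for a purely analog/discrete trip path:
stages = {
    "sense comparator":     2.0,  # fast analog comparator
    "discrete guard logic": 1.0,  # no firmware anywhere in the loop
    "gate driver":          2.0,  # driver propagation delay
    "GaN turn-off":         2.0,  # power-stage switching time
    "trace flight":         2.0,  # ~0.35 m of trace at ~0.6c
}
total = sum(stages.values())
print(f"total: {total:.1f} ns "
      f"({'within' if total <= BUDGET_NS else 'over'} the {BUDGET_NS:.0f} ns budget)")

The takeaway: 10 ns buys roughly two metres of signal travel plus a handful of analog stages, so the guard circuit must live on the same board as the power stage it controls; there is no room for software, interrupts, or a long cable run.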

Budget breakdown
• $80–120k: discrete prototype board + GaN power stage (see the switching-time sketch after this list)
• $20k: red-team bounty payouts + formal verification
• $10k: legal/open-hardware licensing + documentation
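
As a rough feasibility check on the GaN power-stage line item, the standard gate-charge relation t ≈ Q_g / I_drive gives single-digit-nanosecond turn-off with generic datasheet-style values. The numbers below are assumptions for illustration, not figures from the project.

# Rough GaN switching-time estimate via gate charge: t = Q_g / I_drive.
# Both values are generic, assumed datasheet-style figures, not IAS spec values.

q_gate_nC = 6.0   # total gate charge of a typical GaN FET (assumed)
i_drive_A = 4.0   # peak gate-driver sink current (assumed)

t_off_ns = q_gate_nC / i_drive_A   # nC / A = ns
print(f"estimated turn-off time: {t_off_ns:.1f} ns")   # ~1.5 ns

Switching at this timescale is why a GaN stage, rather than a silicon MOSFET or a mechanical contactor (which operates in milliseconds), is the plausible actuator for a single-digit-nanosecond budget.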
Team
Sean Sheppard (sole founder; shipped the full spec on day zero) + inbound hardware engineers from today’s launch.

PDF preprint
https://github.com/CovenantArchitects/The-Partnership-Covenant/releases/download/v1.0/IAS-preprint-v1.0.pdf

Similar projects (6)

Arifa Khan

Preventing AI Catastrophe Through Economic Mechanisms

The Reputation Circulation Standard - Implementation Sprint

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised

Pedro Bentancour Garin

Global Governance & Safety Layer for Advanced AI Systems - We Stop Rogue AI

Building the first external oversight and containment framework + high-rigor attack/defense benchmarks to reduce catastrophic AI risk.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $350K

Jared Johnson

Beyond Compute: Persistent Runtime AI Behavioral Conditioning w/o Weight Changes

Runtime safety protocols that modify reasoning without weight changes; operational across GPT, Claude, and Gemini with zero security breaches in classified use.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $125K

Furkan Elmas

Exploring a Single-FPS Stability Constraint in LLMs (ZTGI-Pro v3.3)

Early-stage work on a small internal-control layer that tracks instability in LLM reasoning and switches between SAFE / WARN / BREAK modes.

Science & technology · Technical AI safety
$0 / $25K

Rufo Guerreschi

Coalition for a Baruch Plan for AI

Catalyzing a uniquely bold, timely and effective treaty-making process for AI

Technical AI safety · AI governance · Global catastrophic risks
$0 raised

James Lucassen

More Detailed Cyber Kill Chain For AI Control Evaluation

Extending an AI control evaluation to include vulnerability discovery, weaponization, and payload creation

Technical AI safety
$0 raised