
Terminal Boundary Systems and the Limits of Self-Explanation

Science & technology · Technical AI safety · Global catastrophic risks

Avinash A

Proposal · Grant
Closes January 26th, 2026
$0 raised
$15,000 minimum funding
$30,000 funding goal


Project summary


I am an independent researcher who has developed the ASE (Absolute Self-Explanation) Impossibility Theorem. Using Symmetric Monoidal Closed Categories, I have proven that "Absolute Self-Explanation", a prerequisite for many current superalignment strategies, is mathematically impossible for agentic systems. This research identifies structural failure points in AI architectures that empirical testing cannot catch. I am seeking $15,000 for a 3-month sprint to finalize the Agda formalization of these proofs and publish a machine-verifiable "Axiomatic Audit" for frontier AI labs.

Why is this high-impact?

Current safety efforts amount to patching holes in a boat; my research proves that the hull itself has a logical limit. By defining the Terminal Boundary, I help the ecosystem avoid a trillion-dollar catastrophic failure caused by trying to scale systems past their logical safety capacity.

What are this project's goals? How will you achieve them?

  1. Machine Verification: Translate the categorical proofs (Yoneda-theoretic naturality failure, Lawvere fixed-point obstructions) into Agda, providing a machine-checked "No-Go Theorem" for AI safety; a minimal illustrative sketch of the Lawvere-style core follows this list.

  2. Define the "Safety Ceiling": Create a formal framework for labs (OpenAI, Anthropic) to identify which alignment goals are physically/logically impossible versus which are engineering challenges.

  3. The Human-AI "Missing Link": Develop a follow-up framework for "Open-Boundary Alignment," which models the missing logical connection between human intent and AI autonomy.
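To make the formalization target concrete, below is a minimal, self-contained Agda sketch of the type-theoretic core behind the Lawvere fixed-point obstruction cited in goal 1: if some map φ : A → (A → B) is point-surjective, then every endomap of B has a fixed point. This is a standard result stated here under simplifying assumptions (plain Agda types rather than objects of a symmetric monoidal closed category), and the module and identifier names (LawvereSketch, PointSurjective, lawvere) are illustrative, not drawn from the project's codebase.

```agda
-- Illustrative sketch only; names are hypothetical and not from the project's repository.
module LawvereSketch where

open import Data.Product using (Σ; _,_; proj₁; proj₂)
open import Relation.Binary.PropositionalEquality using (_≡_; sym)

-- Point-surjectivity: every g : A → B is represented by some code a : A via φ.
PointSurjective : {A B : Set} → (A → (A → B)) → Set
PointSurjective {A} {B} φ = (g : A → B) → Σ A (λ a → (x : A) → φ a x ≡ g x)

-- Lawvere-style fixed-point theorem (Set-level version):
-- a point-surjective φ : A → (A → B) forces every f : B → B to have a fixed point.
lawvere : {A B : Set} (φ : A → (A → B)) → PointSurjective φ
        → (f : B → B) → Σ B (λ b → f b ≡ b)
lawvere {A} {B} φ surj f = φ a₀ a₀ , sym eq
  where
    g : A → B
    g a = f (φ a a)               -- the diagonal map a ↦ f (φ a a)
    a₀ : A
    a₀ = proj₁ (surj g)           -- a code for g, supplied by point-surjectivity
    eq : φ a₀ a₀ ≡ f (φ a₀ a₀)
    eq = proj₂ (surj g) a₀        -- the representation equation, instantiated at a₀
```

Read contrapositively, this is the shape of a no-go result: exhibiting a fixed-point-free endomap of B rules out any point-surjective φ, which is the general pattern behind diagonal impossibility arguments of this kind.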

How will this funding be used?


  • Stipend ($12,000): To support 3 months of full-time research and formalization, preventing my exit from the field due to financial constraints.

  • Compute & Verification Tools ($2,000): For formal verification overhead and library development.

  • Open-Source Publication ($1,000): To ensure all proofs and Agda libraries are publicly available for the AI Safety community.


Who is on your team? What's your track record on similar projects?


I am the sole principal investigator, operating as an independent researcher for 6 years. My track record is defined by high-conviction, self-funded deep work in the categorical foundations of AI safety.

  • Project Evolution: Over the last 6 years, I have moved from theoretical abstractions to the development of the Terminal Boundary Systems (TBS) framework.

  • Deliverables: I have produced two core technical papers ("Terminal Boundary Systems" and "The ASE Impossibility Theorem") and am currently developing a machine-verifiable formalization in Agda.

  • Execution: Operating without institutional support for 6 years demonstrates research discipline, resourcefulness, and a long-term commitment to solving the most difficult 'Safety Ceiling' problems in AI.

What are the most likely causes and outcomes if this project fails?

Likely Causes of Project Failure:

  • Formalization Bottleneck: The Agda formalization of Symmetric Monoidal Closed Categories is highly complex. Failure could occur if the translation from category theory to machine-verified code hits a complexity wall that exceeds the 3-month sprint timeline; a schematic illustration of the scale involved appears after this list.

  • Conceptual Friction: The AI safety community may struggle to adopt a 'structural limit' approach over the current 'empirical testing' paradigm.
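As a rough illustration of the scale behind this bottleneck, the hypothetical Agda skeleton below (shown for illustration only, not drawn from the project's actual development) declares only the tensor, unit, and one-directional structural maps of a monoidal-style structure on Set. A faithful symmetric monoidal closed category additionally requires inverses, naturality conditions, the pentagon and triangle coherence laws, a symmetry braiding, and internal homs, and those layers are where the formalization effort concentrates.

```agda
-- Hypothetical illustration only; not the project's codebase.
-- A full SMCC also needs inverses, naturality, pentagon/triangle coherence,
-- a symmetry braiding, and internal homs (the closed structure).
record MonoidalOnSetSketch : Set₁ where
  field
    _⊗_   : Set → Set → Set                             -- tensor product on objects
    unit  : Set                                         -- tensor unit
    assoc : {A B C : Set} → (A ⊗ B) ⊗ C → A ⊗ (B ⊗ C)   -- associator (one direction only)
    unitL : {A : Set} → unit ⊗ A → A                    -- left unitor (one direction only)
    unitR : {A : Set} → A ⊗ unit → A                    -- right unitor (one direction only)
```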

Likely Outcomes of Project Failure:

  • Field Risk: Without a proven 'Safety Ceiling,' labs will continue to pursue Absolute Self-Explanation, a goal my theory suggests is mathematically impossible. This leads to a false sense of security in AI alignment.

  • Catastrophic Failure: If agentic systems are deployed without acknowledging these structural boundaries, we risk Modal Collapse—where an AI's internal logic deviates from human reality in an unobservable, uncorrectable way.

  • Personal Risk: My exit from the field. After 6 years of self-funding, a lack of institutional support would mean the permanent loss of this specific mathematical early-warning system for the safety community.

How much money have you raised in the last 12 months, and from where?


In the last 12 months, I have raised $0 in external funding. The project has been 100% self-funded through my own personal resources and 6 years of dedicated research labor.

I have reached a critical mass: the theoretical work is complete, but the computational formalization in Agda requires dedicated runway that my personal resources can no longer sustain. I am seeking this grant to transition from an 'Independent Explorer' to a 'Funded Developer' of safety-critical formal tools.

Comments

Avinash A

2 days ago

Hi @evhub and @austin, I'd appreciate a technical sanity check on this project.

My core claim is that 'Absolute Self-Explanation' (ASE) is a mathematical impossibility for agentic systems, which I've modeled as a naturality failure at the terminal boundary within Symmetric Monoidal Closed Categories. I am currently formalizing this in Agda to prove that certain superalignment goals are structurally unreachable.

Given your work on deceptive alignment and agent foundations, I'd value your perspective on whether machine-verifying these 'No-Go Theorems' is a high-priority bottleneck for the field. I've self-funded for 6 years and am now seeking a 3-month sprint to finalize the Agda code. Papers attached in the description.