Operating Capital for AI Safety Evaluation Infrastructure

Technical AI safety · AI governance · Biosecurity

Chris Canal

Active grant
$400,000 raised of $400,000 funding goal
Fully funded and not currently accepting donations.

Project summary

Equistamp provides specialized engineering teams to AI safety organizations that need rapid deployment of talent to meet critical publication, policy, and contract deadlines. We've built evaluation infrastructure for METR (HCAST, RE-Bench), UK AISI (Control Arena), Redwood Research (Linux Bench), and Daniel Kang's lab (CVE Bench), to name a few notable examples.

We're seeking $400K USD in operating capital to eliminate the cash flow constraints that currently limit our ability to serve the AI safety community's growing evaluation needs.

Note on Structure: Equistamp is a Delaware C Corp (for-profit), making this a fiscal sponsorship arrangement. Funds donated to Manifund will be invested in Equistamp through a SAFE agreement, ensuring proper oversight while maintaining our sustainable business model.

What are this project's goals? How will you achieve them?

Primary Goal: Eliminate operating capital constraints that prevent us from accepting high-priority AI safety evaluation projects.

The Problem: AI safety organizations design sophisticated evaluations to predict future AI risks, but implementing these evaluations requires dozens of specialized engineers who must be onboarded, trained, and supervised for 2-5 month periods. After project completion, there's typically no work until the next evaluation cycle, creating inefficiency.

Our Solution: We maintain a trained taskforce specializing in widely-used evaluation stacks (primarily Meridian Lab's Inspect) that moves between projects at near-zero transition cost. This enables researchers to build experiments and publish findings as quickly as possible.

Current Constraint: Most clients pay on Net 30 terms, the EU Commission pays every 6 months, and we pay staff weekly. With only $192K in operating capital, we're limited to $192K in monthly salary expenses, forcing us to decline projects despite strong demand.
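To make this concrete, here is a minimal sketch of the working-capital arithmetic (the Net 30 terms, weekly payroll, 6-month EU cycle, and the $192K/$400K figures come from this application; the week counts are simplifying assumptions):

```python
# Minimal sketch of the working-capital constraint described above.
# Assumption: a client on Net 30 terms pays roughly 4 weeks after
# invoicing, so about one month of weekly payroll must be floated
# out of our own capital before revenue arrives.

operating_capital = 192_000   # current cash on hand (USD)

# Net 30 ~= 4 weeks of float, i.e. one month of payroll:
max_monthly_payroll = operating_capital
print(f"Monthly payroll cap on Net 30 terms: ${max_monthly_payroll:,.0f}")  # $192,000

# Longer terms tighten the cap: a ~6-month cycle (e.g. the EU Commission)
# means floating ~26 weeks of payroll from the same capital.
float_weeks_eu = 26
weekly_cap_eu = operating_capital / float_weeks_eu
print(f"Weekly payroll cap on 6-month terms: ${weekly_cap_eu:,.0f}")        # ~$7,385

# The requested $400K raises the Net 30 monthly cap accordingly:
print(f"Monthly cap with funding: ${operating_capital + 400_000:,.0f}")     # $592,000
```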

How We'll Achieve This:

  • Minimum funding ($400K): Maintain current service capacity and meet existing client demand without declining projects

  • Deploy funds immediately to address current constraints and scale capacity

How will this funding be used?

This funding addresses operating capital (cash flow) rather than revenue. Our business model is sustainable, since clients pay us to build their evaluations and infrastructure. The challenge is timing: we must pay engineers weekly while waiting 30-180 days for client payments.

Budget Allocation:

  • 100% Operating Capital: Bridging cash flow gaps between paying contractors weekly and receiving client payments on Net 30 to Net 180 terms

  • No overhead for new hires: Funds go directly to contractor salaries, enabling us to accept more projects

  • Immediate deployment: Funds will be put to work as soon as they are approved, allowing us to accept pending projects and staff them immediately

Detailed budget available upon request. 

Who is on your team? What's your track record on similar projects?

Leadership:

  • Christopher Canal (CEO): Leading business operations and client relationships

  • Daniel O'Connell (CTO): Technical leadership and evaluation stack expertise

Track Record: We've delivered critical infrastructure for leading AI safety organizations:

  • METR: HCAST, Uplift, and RE-Bench development

  • UK AISI: Control Arena Kubernetes configuration, multiple benchmark migrations, bug fixes for Control Arena and Inspect Framework

  • Redwood Research: Linux Bench

  • Daniel Kang's lab at UIUC: CVE Bench

  • EU Commission: Lead applicant coordinating consortium including METR, Epoch AI, Transluce, Arcadia Impact, CARMA, Apart Research, and BERI

Key Differentiator: We are one of the few profitable AI safety-focused companies, demonstrating sustainable operations while serving mission-critical needs.

References available from: David Rein (Redwood Research), Kit Harris (METR), Sami Jawhar, Tyler Tracy, Jasmine Wang, Daniel Kang, Milan Griffes

What are the most likely causes and outcomes if this project fails?

Failure Scenarios:

  1. Insufficient funding: Without adequate operating capital, we continue declining high-priority AI safety projects, slowing critical evaluation work across the ecosystem

  2. Cash flow crisis: If we accept projects beyond our capital capacity, we risk inability to pay contractors, damaging relationships and our reputation in the AI safety community.

  3. Missed EU Commission opportunity: Without the full $400K, we will struggle to deploy personnel for EU Commission contracts as quickly as desired starting in January 2026, leaving other organizations to coordinate the consortium less efficiently.

  4. Competition from better-capitalized firms: Traditional consulting firms with deeper pockets may enter the AI safety evaluation space, potentially prioritizing profit over safety-focused mission alignment.

Outcome if project fails: AI safety organizations face slower evaluation development cycles, delayed publications, and reduced capacity to assess risks from transformative AI systems. 

Mitigation: We're simultaneously pursuing traditional financing (banks, private investors), but grant funding offers more favorable terms and better mission alignment.

How much money have you raised in the last 12 months, and from where?

Total raised in last 12 months: $0 in external funding

Current operating capital: $192K (accumulated from client revenue)

Revenue status: Profitable and sustainable through client contracts, but constrained by cash flow timing

Previous applications:

  • Applied for an SBA loan → denied (new Trump administration ownership requirements: recent policy requires 100% US ownership, and our CTO is a UK citizen)

  • Currently in discussions with mission-aligned private investors

  • Submitted an Open Philanthropy application (pending): $200K-$1.3M

Note: We are profitable, generating revenue from AI safety organizations that pay us to build evaluations. This request specifically addresses operating capital (cash flow timing) rather than operational losses.

Comments

sunghunkwag · 18 days ago

This is essentially a business loan disguised as a grant. Equistamp is profitable but wants free money instead of bank financing to bridge their receivables gap. Is this really what grants are for?

Chris Canal · 17 days ago

@sunghunkwag You are correct that we needed a loan. We applied for loans but were denied because we are not 100% American-owned, and the current US administration has made it impossible to get SBA loans if you have non-American stockholders (https://www.sba.gov/article/2025/03/06/administrator-loeffler-announces-sba-reforms-put-american-citizens-first). In our case, my cofounder is a UK citizen. We applied for private financing, but most private banks in the US are following the administration's lead and denying loans if a company is even 1% owned by a non-US citizen. We would actually much prefer a loan to giving up some control of the company, but these are the cards we were dealt. This grant is a SAFE (https://www.startengine.com/blog/understanding-SAFEs-a-simple-agreement-for-future-equity-explained), so we are giving Manifund equity in our company in return for capital that allows us to help more governments and AI safety research orgs. All that being said, we hope to return capital to Manifund so that they can invest in other important AI safety projects ASAP.

sunghunkwag · 17 days ago

@chriscanal Thank you for providing the full context. Based on your clarification, my previous comment was founded on an incorrect assumption.

I apologize.

As someone who is fundamentally driven by the question "What is right?", my judgment of your motives, made without knowing the full facts of your situation (specifically the SBA policy and your co-founder's status), was, in itself, not right.

I was wrong to frame your funding as a simple "disguised loan" based on preference. The reality you described—that you were denied access to traditional financing due to arbitrary, nationalistic filters—is a critical piece of information I did not have.

This new information, however, only deepens my core critique of the system itself.

My original point was that the funding ecosystem filters for 'domesticated signals' (legible infrastructure) over 'ungovernable' R&D. Your story provides an even more stark example: the system is so dysfunctional that it applies bureaucratic, nationalistic filters to exclude even the proven, 'domesticated' projects like yours.

If a profitable, established company is forced to give up equity (via a SAFE) simply because of a co-founder's passport, it proves my point more strongly than I could have imagined: This ecosystem does not run on merit; it runs on arbitrary filters.

Thank you again for sharing your reality. It has clarified the true, and more severe, nature of the dysfunction we are all operating in.


Austin Chen · 21 days ago

Approving this project. Equistamp has been doing work with the most prominent AI safety research orgs and has been operating profitably by providing consulting services. After speaking with Chris and Jueyan, we're happy to help them solve their operating cash constraints.

I'm also broadly excited by this model of supporting aligned for-profit work. As Chris mentions, we're structuring this grant as a SAFE (Simple Agreement for Future Equity) investment with an $8m cap into Equistamp; proceeds (if any) will be returned to AISTOF's balance, allowing them to make further grants.
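For readers unfamiliar with SAFE mechanics, here is a minimal sketch of the conversion arithmetic (assuming a standard post-money SAFE; the $400K investment and $8m cap are from this page, while the later-round valuation is purely hypothetical):

```python
# Sketch of post-money SAFE conversion arithmetic. Assumes a standard
# YC-style post-money SAFE; not a statement of this deal's exact terms.

investment = 400_000         # capital invested via the SAFE (USD)
valuation_cap = 8_000_000    # post-money valuation cap mentioned above

# Under a post-money SAFE, the investor's ownership at conversion is
# investment / cap whenever the priced round values the company at or
# above the cap.
ownership = investment / valuation_cap
print(f"Ownership at conversion: {ownership:.1%}")  # 5.0%

# If Equistamp later raised or exited at a hypothetical $20M valuation,
# the stake returned to the fund would be worth roughly:
hypothetical_valuation = 20_000_000
print(f"Implied stake value: ${ownership * hypothetical_valuation:,.0f}")  # $1,000,000
```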

sunghunkwag · 18 days ago

@Austin Here's the question: What's the right balance between scaling proven evaluation methods and funding architectural research that questions those methods' foundations?

Both matter. Evaluation infrastructure helps us measure current systems. But architectural research helps us build systems that won't need post-hoc safety measures because they're safe by construction.

The AI safety community benefits from both approaches:

  • Proven infrastructure (like what Equistamp provides) for immediate evaluation needs

  • Foundational research (like deterministic architectures) for long-term safety paradigms

Different timelines, different risk profiles, both necessary. Independent researchers contribute by exploring architectures that institutional labs might consider too uncertain or fundamental to prioritize.

That's the value of platforms like Manifund: supporting diverse approaches to the same critical mission. Some projects scale existing solutions. Others question whether those solutions address root causes.

Question for the community: How do we balance funding proven evaluation infrastructure against speculative but potentially transformative architectural research?