Funding requirements

  • Sign grant agreement

  • Reach min funding

  • Get Manifund approval

Operating Capital for AI Safety Evaluation Infrastructure

Technical AI safety · AI governance · Biosecurity

Chris Canal

Proposal · Grant
Closes November 8th, 2025
$400,000 raised
$400,000 minimum funding
$400,000 funding goal
Fully funded and not currently accepting donations.

Project summary

Equistamp provides specialized engineering teams to AI safety organizations that need rapid deployment of talent to meet critical publication, policy, and contract deadlines. We've built evaluation infrastructure for METR (HCAST, RE-Bench), UK AISI (Control Arena), Redwood Research (Linux Bench), and Daniel Kang's lab (CVE Bench), among other notable examples.

We're seeking $400k USD in operating capital to eliminate cash flow constraints that currently limit our ability to serve the AI safety community's growing evaluation needs.

Note on Structure: Equistamp is a Delaware C Corp (for-profit), making this a fiscal sponsorship arrangement. Funds donated to Manifund will be invested in Equistamp through a SAFE agreement, ensuring proper oversight while maintaining our sustainable business model.

What are this project's goals? How will you achieve them?

Primary Goal: Eliminate operating capital constraints that prevent us from accepting high-priority AI safety evaluation projects.

The Problem: AI safety organizations design sophisticated evaluations to predict future AI risks, but implementing these evaluations requires dozens of specialized engineers who must be onboarded, trained, and supervised for 2-5 month periods. After project completion, there's typically no work until the next evaluation cycle, creating inefficiency.

Our Solution: We maintain a trained taskforce specializing in widely-used evaluation stacks (primarily Meridian Lab's Inspect), deploying seamlessly between projects at near-zero transition cost. This enables researchers to build experiments and publish findings as quickly as possible.

Current Constraint: Our clients pay on Net 30 terms (the EU Commission pays every six months), while we pay staff weekly. With only $192K in operating capital, we're limited to $192K in monthly salary expenses, forcing us to decline projects despite strong demand.
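
The constraint above is simple float arithmetic: payroll must be covered out of pocket for the entire payment lag, so available capital divided by the lag (in months) bounds sustainable monthly payroll. A minimal sketch of this model (a hypothetical illustration, not Equistamp's actual accounting):

```python
def max_monthly_payroll(operating_capital: float, payment_lag_months: float) -> float:
    """Monthly payroll must be floated for the full client payment lag,
    so sustainable payroll = operating capital / lag in months."""
    return operating_capital / payment_lag_months

# Net 30 terms: roughly a one-month float, so $192K of capital
# supports about $192K/month in salaries
print(max_monthly_payroll(192_000, 1))   # 192000.0

# A contract paid every 6 months ties up six months of payroll,
# cutting the supportable monthly spend to a sixth
print(max_monthly_payroll(192_000, 6))   # 32000.0
```

Under this model, longer payment terms shrink capacity linearly: the requested $400K would roughly double the payroll the company can float at Net 30 terms.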

How We'll Achieve This:

  • Minimum funding ($400K): Maintain current service capacity and meet existing client demand without declining projects

  • Deploy funds immediately to address current constraints and scale capacity

How will this funding be used?

This funding addresses operating capital (cash flow) rather than revenue. Our business model is sustainable—clients pay us to build their evaluations and infrastructure. The challenge is timing: we must pay engineers weekly while waiting 30-180 days for client payments.

Budget Allocation:

  • 100% Operating Capital: Bridging cash flow gaps between paying contractors weekly and receiving client payments on Net 30 to Net 180 terms

  • No overhead for new hires: Funds go directly to contractor salaries, enabling us to accept more projects

  • Immediate deployment: Funds will be put to work as soon as approved, allowing us to accept pending projects and staff them without delay

Detailed budget available upon request. 

Who is on your team? What's your track record on similar projects?

Leadership:

  • Christopher Canal (CEO): Leading business operations and client relationships

  • Daniel O'Connell (CTO): Technical leadership and evaluation stack expertise

Track Record: We've delivered critical infrastructure for leading AI safety organizations:

  • METR: HCAST, Uplift, and RE-Bench development

  • UK AISI: Control Arena Kubernetes configuration, multiple benchmark migrations, bug fixes for Control Arena and Inspect Framework

  • Redwood Research: Linux Bench

  • Daniel Kang's lab at UIUC: CVE Bench

  • EU Commission: Lead applicant coordinating consortium including METR, Epoch AI, Transluce, Arcadia Impact, CARMA, Apart Research, and BERI

Key Differentiator: We are one of the few profitable AI safety-focused companies, demonstrating sustainable operations while serving mission-critical needs.

References available from: David Rein (Redwood Research), Kit Harris (METR), Sami Jawhar, Tyler Tracy, Jasmine Wang, Daniel Kang, Milan Griffes

What are the most likely causes and outcomes if this project fails?

Failure Scenarios:

  1. Insufficient funding: Without adequate operating capital, we continue declining high-priority AI safety projects, slowing critical evaluation work across the ecosystem

  2. Cash flow crisis: If we accept projects beyond our capital capacity, we risk inability to pay contractors, damaging relationships and our reputation in the AI safety community.

  3. Missed EU Commission opportunity: Without the full $400k, we will struggle to staff EU Commission contracts as quickly as needed starting January 2026, leaving the consortium to be coordinated less efficiently by other organizations.

  4. Competition from better-capitalized firms: Traditional consulting firms with deeper pockets may enter the AI safety evaluation space, potentially prioritizing profit over safety-focused mission alignment.

Outcome if project fails: AI safety organizations face slower evaluation development cycles, delayed publications, and reduced capacity to assess risks from transformative AI systems. 

Mitigation: We're simultaneously pursuing traditional financing (banks, private investors), but grant funding offers more favorable terms and better mission alignment.

How much money have you raised in the last 12 months, and from where?

Total raised in last 12 months: $0 in external funding

Current operating capital: $192K (accumulated from client revenue)

Revenue status: Profitable and sustainable through client contracts, but constrained by cash flow timing

Previous applications:

  • Applied for an SBA loan → Denied (new Trump administration ownership requirements: recent policy requires 100% US ownership, and our CTO is a UK citizen)

  • Currently in discussions with mission-aligned private investors

  • Submitted an Open Philanthropy application (pending): $200K-$1.3M

Note: We are profitable, generating revenue from AI safety organizations who pay us to build evaluations. This request specifically addresses operating capital (cash flow timing) rather than operational losses.
