Manifund
Comments

Operating Capital for AI Safety Evaluation Infrastructure
17 days ago

@chriscanal Thank you for providing the full context. Based on your clarification, my previous comment was founded on an incorrect assumption.

I apologize.

As someone fundamentally driven by the question "What is right?", I judged your motives without knowing the full facts of your situation (specifically the SBA policy and your co-founder's status), and that judgment was, in itself, not right.

I was wrong to frame your funding as a simple "disguised loan" based on preference. The reality you described—that you were denied access to traditional financing due to arbitrary, nationalistic filters—is a critical piece of information I did not have.

This new information, however, only deepens my core critique of the system itself.

My original point was that the funding ecosystem filters for 'domesticated signals' (legible infrastructure) over 'ungovernable' R&D. Your story provides an even starker example: the system is so dysfunctional that it applies bureaucratic, nationalistic filters to exclude even proven, 'domesticated' projects like yours.

If a profitable, established company is forced to give up equity (via a SAFE) simply because of a co-founder's passport, it proves my point more strongly than I could have imagined: This ecosystem does not run on merit; it runs on arbitrary filters.

Thank you again for sharing your reality. It has clarified the true, and more severe, nature of the dysfunction we are all operating in.

Operating Capital for AI Safety Evaluation Infrastructure
18 days ago

This is essentially a business loan disguised as a grant. Equistamp is profitable but wants free money instead of bank financing to bridge their receivables gap. Is this really what grants are for?

Operating Capital for AI Safety Evaluation Infrastructure
18 days ago

@Austin Here's the question: What's the right balance between scaling proven evaluation methods and funding architectural research that questions those methods' foundations?

Both matter. Evaluation infrastructure helps us measure current systems, while architectural research helps us build systems that won't need post-hoc safety measures because they're safe by construction.

The AI safety community benefits from both approaches:

  • Proven infrastructure (like what Equistamp provides) for immediate evaluation needs

  • Foundational research (like deterministic architectures) for long-term safety paradigms

Different timelines, different risk profiles, both necessary. Independent researchers contribute by exploring architectures that institutional labs might consider too uncertain or fundamental to prioritize.

That's the value of platforms like Manifund: supporting diverse approaches to the same critical mission. Some projects scale existing solutions. Others question whether those solutions address root causes.

Question for the community: How do we balance funding proven evaluation infrastructure against speculative but potentially transformative architectural research?