Spartacus.app is alive and kicking. Time for a long-overdue update.
The last post on this Substack was the Nov/Dec 2024 status report; after that, I stopped posting monthly updates. This is Part 1 of three posts I’ll publish over the next week or so, and it will catch everyone up on what happened, what’s changed, and what’s coming next.
Part 2 will make the case for why Spartacus.app matters even more now than when we started, and dig into the unique obstacles to coordinating behavior at scale. Part 3 will detail our near-term roadmap.
TL;DR
For those who need a refresher, Spartacus.app is a platform that uses conditional commitments, aka Assurance Contracts (“I’ll do this if you do it too”), to address coordination and collective-action problems. We were awarded an ACX grant in 2024 and currently operate as a non-profit under our fiscal sponsor, Manifund.org.
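If it helps to see the mechanism concretely, here’s a minimal sketch of a threshold-triggered assurance contract in Python. This is purely illustrative, not how the platform is actually implemented; the names and numbers are made up.

```python
from dataclasses import dataclass, field

@dataclass
class AssuranceContract:
    """A conditional-commitment pool: pledges bind only once enough people join.
    (Illustrative sketch only, not Spartacus.app's actual implementation.)"""
    action: str      # the behavior everyone is conditionally committing to
    threshold: int   # pledges required before anyone is obligated to act
    pledgers: set[str] = field(default_factory=set)

    def pledge(self, name: str) -> bool:
        """Record a pledge and report whether the contract has now triggered."""
        self.pledgers.add(name)
        return self.triggered()

    def triggered(self) -> bool:
        # Below the threshold, no one owes anything; at or above it,
        # every pledge becomes binding at once.
        return len(self.pledgers) >= self.threshold

# "I'll sign the statement if 99 others will too."
contract = AssuranceContract(action="sign the statement", threshold=100)
contract.pledge("alice")  # False: below threshold, so alice owes nothing yet
```

The point of the structure is that no individual pledge carries any obligation until the threshold is crossed, which removes the first-mover risk that usually stalls collective action.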
Spartacus is alive, sharper than before, and focused on a specific part of the problem space (AI safety) where our platform can enable enormous positive outcomes. A brand-new team brings more capability and seniority than we’ve ever had. However, we’re running low on operating cash, with about 8–10 weeks of runway. We’ve applied to the Survival and Flourishing Fund’s 2026 S-Process Grant Round, and nine organizations have written letters of intent to pilot the platform if we’re funded, with more expected to sign on.
We have our work cut out for us and would be grateful for assistance (see “How you can help”).
What happened in 2025
It’s been a long time since the last update, and I have a duty to be transparent with current and past supporters.
The following setbacks are enough to kill many early-stage startups. We survived because I refused to quit.
Spartacus has always been me (Jordan Braunstein) plus one technical collaborator, on a ramen budget. It was a fragile setup. We’re trying to succeed in the “social impact” space, where easy commercialization has to be resisted in favor of harder-to-measure forms of social value: crowdfunding can price value in dollars because its goal is itself denominated in dollars, whereas the shared value of collective behavioral change lives in non-monetary outcomes that are far harder to appraise. In 2025, the known risks of this setup played out badly in two compounding ways.
The first was engineering disruption. My original collaborator, Tetra Jones, was with me from the ACX 2024 grant award through MVP completion; we split the funds in half, with her share pre-funding the compensation we budgeted for her technical contribution. After launch, Tetra left in early 2025 to pursue other opportunities; the parting was amicable. I found a replacement, also through the ACX community, who joined as a freelancer and worked with me for nine months, then disappeared mid-sprint in January 2026 with no warning and no handoff.
Two critical departures in a row, with no backup, on a project where technical support and iterative development were vital. Each brought operations to a near halt; the second derailed us for roughly three months in the middle of a very promising evaluation.
…Which might not have been a serious problem, except that we hadn’t yet found product-market fit. Our initial approach, justified by early prototype feedback and user interviews, stalled out of the gate, and subsequent pivots failed to gain traction.
I spent late 2024 and much of 2025 trying to bootstrap pilot programs with organized labor and various political and social non-profits. I thought a few marquee names would give us a springboard into growth and create a flywheel.
Without insider backing to overcome the typical risk aversion toward an unproven platform, targeting the nonprofit and activist space writ large was a cul-de-sac. The trust and utility thresholds for adoption in these verticals are high: budgets are tight, and partnerships require substantial relationship-building, buy-in from multiple stakeholders, bureaucratic approval processes, and onerous compliance requirements. We underestimated the friction, red tape, and committee culture.
Time and again, a compelling demo and a hyper-tailored pitch didn’t clear the bar, no matter how cleanly our mechanisms solved a known coordination problem. In parallel, we ran several grassroots, ad hoc “mini-pilots,” but these rarely hit a success threshold or produced measurable impact. We kept running into objections rooted in UX/UI shortcomings we couldn’t resolve quickly, and into recurring structural barriers that boiled down to one pattern: the most impactful use cases were the hardest to reach, because their “windows of opportunity” were narrow and the localized needs behind them were inscrutable from the outside.
The value of coordination can be highly context- and timing-dependent, and we lacked a reliable way to predict or target this in advance. Ideally, users would come to us when circumstances were primed in ways only they could recognize, but that would require a level of brand awareness we couldn’t achieve with our modest resources.
The engagements that did work showed where the mechanisms we built hold promise, and they helped shape our positioning and emphasis. Finding product-market fit is its own problem, separate from the engineering and budget story, and a harder one. I’ll dig into the specific lessons in Part 2.
We’ve internalized both general lessons. The engineering risk is materially reduced by the new team (more on them below). The targeting risk is reduced by focusing on the AI-safety coordination niche, where partner organizations already understand the underlying game theory, where our reputation is established, and where introductions within the community are easier to secure.
We’re further mitigating risk by spinning off a version of the product optimized for another novel use case, one that can generate fast, recurring revenue, which can then flow back into the main entity as a sustainability measure (more on this in Part 3).
This doesn’t guarantee the next year will go well, but it does mean the structural conditions for execution are materially different and improved.
The reset
I moved to New York in April 2025 to be closer to family, which turned out to be lucky in ways I hadn’t planned. I recently joined Collider (https://collider.nyc/), a Manhattan workspace for AI safety and other high-impact professionals. I plugged into the local AI safety and EA scene: EANYC, Coalition for Good Futures, Dear Crisis, and a handful of others I’ll talk about across this series. Proximity to people doing aligned work matters a lot, and this is about as well-situated as one can be outside of San Francisco or Berkeley.
In March and April of this year, I brought on two new collaborators who substantially upgraded the team’s capacity and competence.
Aster Langhi is our new technical lead. They’re a Staff+ engineer with prior tenure at Google and Adobe and a decade of independent startup work.
Clarina Manuel is joining as a technical fellow. She’s a student with deep applied ML and full-stack experience, including production work at C1X and computer-vision research at the USC Viterbi iLab.
Aster brings seniority, a strong track record, and the ability to execute, while Clarina brings horsepower and flexibility that the prior arrangement lacked. While nothing is guaranteed, the odds of rapid, material progress have substantially increased.
SFF grant and LOIs
I submitted Spartacus’s application to SFF’s 2026 S-Process Grant Round on April 21, under the Freedom and Fairness tracks. The application is anchored in a clear repositioning: Spartacus is conditional-commitment infrastructure for AI-safety coordination problems first, with consumer-facing applications serving as a sustainability layer rather than a primary focus.
Our base request is $50K. Our ambitious ask is $200–300K, which would fund 9–12 months of focused execution on the relaunched platform. The minimum keeps us alive and shipping at a reduced scope.
In anticipation of the application, we solicited a portfolio of letters of intent. In a two-week sprint of focused outreach, nine organizations sent non-binding LOIs committing to pilot the platform conditional on funding:
EANYC: AI-safety pledges and conditional, reciprocal pacts
Coalition for Good Futures: cross-ideological AI-safety statement coordination
Synthetix Institute: distributed research-coordination pilots
AI Legislation Tracker (Matthew Taber): activation layer over tracked bills
Dear Crisis: converting salon-generated interest into coordinated next steps
UAW Local 2710 (via Andrew Souther, Columbia): academic-labor organizing
LUCITÀ: creative-industry coordination tied to their Creatives on AI report
Top Dog College Admissions: college-admissions use-case development
The Mother Tree: critical mass building for community programming
Plus endorsements from Scott Alexander (who recommended the original ACX 2024 grant) and Erik Passoja of Protect Digital Identity (attesting to the planning work behind March 2 Testify).
For most of 2025, I was told that no one with organizational responsibility wanted to be the first to work with an obscure startup. Clearly, some of this new interest stems from shifts in the AI-safety landscape, but it also reflects our refined positioning and value proposition and the lessons we’ve incorporated.
More on both in Part 2.
Near term
Through summer, the priorities are:
Extend the runway. SFF concludes in the fall, but grants and matching pledges can be awarded earlier. We’re also pursuing alternative funding sources in case SFF doesn’t come through.
Aster’s onboarding. They’re already in the codebase. A full refactoring of the platform is underway ahead of a soft relaunch scheduled for early summer.
Convert LOIs into live engagements, even at reduced scope where necessary.
Continue growing the LOI portfolio in parallel.
I’ll also be writing more often. Expect monthly updates and ad hoc commentary on current events related to collective action and coordination for AI safety. The silence is over.
How you can help
If you fund AI-safety-relevant non-profit organizations and projects, reach me at jordan@spartacus.app. I’ll share the full application package directly with anyone evaluating us for a grant or matching donation. The most pressing need is securing enough funds to keep everything afloat for the next 6–12 months.
If you have a coordination problem in your own work, particularly in AI safety, governance, research, or organizing, and you’ve been wondering whether threshold-commitment infrastructure would unblock it, let’s talk. I’m prioritizing pilots with sharply defined problems and accessible target populations.
If you have feedback, criticism, or a pointer to a different kind of problem than the ones I’ve been chasing, I want to hear it. Some of the most helpful input has come from direct, blunt assessments and debates with people who have the context and experience I lack. Don’t be shy!