AIVA OS: Causal Intelligence for Medicine

Science & technology · AI governance · Global health & development

Proposal (Grant) · Closes February 1st, 2026
$0 raised · $1,000 minimum funding · $500,000 funding goal

Project summary

AIVA OS is a causal intelligence operating system for medicine that reconstructs a unified, longitudinal patient state from fragmented data, identifies upstream causal drivers, and runs in silico counterfactual simulations to rank interventions before clinicians act. It outputs deterministic, reproducible decision artifacts (not just narratives) so care teams can test, compare, and safely execute plans for chronic disease and longevity management across time and silos.

What are this project's goals? How will you achieve them?

AIVA OS is the causal intelligence layer for medicine: a control layer that computes what is causative in a patient, not just what is measurable, and drives decisions that are repeatable, testable, and safe under fragmented real-world data. The project’s goal is to move medicine from documentation and correlation to state estimation and causal control. Concretely, AIVA OS reconstructs a coherent patient state across time and silos, identifies the upstream causal drivers behind chronic disease and aging trajectories, runs in silico counterfactuals to compare intervention paths, and outputs decisions that are reproducible rather than narrative-based. If this layer exists as infrastructure, longevity and chronic disease management stop being biomarker optimization theater and become an engineering discipline with closed-loop iteration.

The system is already engineered as a deterministic patient-state and causal-simulation stack that ingests fragmented longitudinal data, compiles it into a governed state representation, and uses causal modeling plus counterfactual rollouts to score interventions. The work now is execution at scale: hardening the platform for repeated deployments, expanding integrations with real clinical data sources, and running more live cohorts to convert the capability into an operational standard. Funding primarily supports productization and rollout, including deployment infrastructure, partner onboarding, and the clinical operations required to keep validating outcomes and safety as the system is used on more patients across more sites.
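To make this concrete, below is a minimal, illustrative Python sketch of the loop just described: compile fragmented records into one canonical patient state, run seeded counterfactual rollouts for each candidate intervention, and emit a hashed decision artifact. AIVA OS’s internals are not public, so every name, data shape, and the placeholder effect model here are assumptions rather than the production system.

```python
# Illustrative sketch only: all names, data shapes, and the effect model
# below are assumptions; they are not AIVA OS's actual implementation.
from dataclasses import dataclass
import hashlib
import json
import random


@dataclass(frozen=True)
class PatientState:
    """Canonical longitudinal state compiled from fragmented records."""
    patient_id: str
    biomarkers: dict  # e.g. {"hba1c": 6.1, "ldl": 132.0}


def compile_state(records: list[dict]) -> PatientState:
    """Merge fragmented records (labs, notes, imaging) into one state,
    keeping the most recent value per biomarker."""
    merged: dict = {}
    for rec in sorted(records, key=lambda r: r["timestamp"]):
        merged.update(rec["values"])
    return PatientState(records[0]["patient_id"], merged)


def counterfactual_rollout(state: PatientState, intervention: str,
                           seed: int) -> float:
    """Stand-in for a causal simulation: returns a risk score for the
    state under an intervention. Deterministic given the same inputs."""
    rng = random.Random(f"{state.patient_id}|{intervention}|{seed}")
    baseline = sum(state.biomarkers.values())
    return baseline * rng.uniform(0.7, 1.0)  # placeholder effect model


def decide(records: list[dict], interventions: list[str],
           seed: int = 0) -> dict:
    """Rank interventions by simulated risk and emit a decision artifact
    whose content hash makes the run reproducible and auditable."""
    state = compile_state(records)
    scores = {i: counterfactual_rollout(state, i, seed)
              for i in interventions}
    artifact = {
        "patient": state.patient_id,
        "seed": seed,
        "scores": scores,
        "ranked": sorted(scores, key=scores.get),  # lowest risk first
    }
    artifact["hash"] = hashlib.sha256(
        json.dumps(artifact, sort_keys=True).encode()).hexdigest()
    return artifact
```

The property this artifact is meant to illustrate: two runs over the same records with the same seed produce the same hash, so a decision can be audited and compared rather than re-narrated.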

How will this funding be used?

Funding will be used to scale what already works into a repeatable, deployable product, and to expand clinical validation and usage. Concretely:

  • Deployment engineering and integrations: productionizing data ingestion across real hospital workflows (EHR feeds, labs, imaging reports, notes), reliability hardening, security/compliance, and monitoring so deployments can run continuously, not as one-off pilots.

  • Cohort execution and clinical operations: staffing and running additional live cohorts with partners (onboarding, protocol review loops, outcomes tracking), and building the operational playbook to roll out faster across sites.

  • Compute and simulation capacity: infrastructure for large-scale in silico rollouts, model runs, and reproducible replay at higher throughput (a minimal replay sketch follows this list).

  • Product and interface: turning the system into an operator-grade workflow (patient view, longitudinal state view, simulation comparison, decision outputs) so clinicians and care teams can use it with minimal friction.

  • Evidence and governance artifacts: packaging outputs into institution-friendly artifacts (traceable decision outputs, safety controls, confidence reporting) that support adoption and procurement.
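Continuing the earlier sketch (same caveat: these names are hypothetical, not AIVA OS’s API), “reproducible replay” can be as simple as recomputing the decision artifact from stored inputs and checking that its hash matches the one recorded at decision time:

```python
def verify_replay(records: list[dict], interventions: list[str],
                  recorded: dict) -> bool:
    """Re-run the decision with the recorded seed and confirm the new
    artifact hash matches the one stored when the decision was made."""
    replayed = decide(records, interventions, seed=recorded["seed"])
    return replayed["hash"] == recorded["hash"]
```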


Who is on your team? What's your track record on similar projects?

We’re a small, technical founding team. I’m Aditya Singh, founder of AIVA OS, and I lead the product and core system architecture. My co-founder Henna Advani is a Cambridge researcher who was originally on a traditional academic PhD path, saw the AIVA prototype early, and joined full-time after we went through the Palantir Fellowship in SF, committing to build this as a deep-tech venture. We’re supported by a tight set of senior technical and clinical collaborators from our partner network, and we bring in domain experts as needed for specific cohorts and deployments.

What are the most likely causes and outcomes if this project fails?

  • Stalls as a high-value R&D system rather than a deployed standard: It may remain a powerful internal engine used by a few partners without becoming an institution-wide default.

  • Gets absorbed into a larger platform as a feature, not a category: A payer, provider platform, or healthtech incumbent may integrate parts of it, capturing value without AIVA becoming the owning layer.

  • Becomes a niche tool for specific cohorts instead of a general causal control layer: Strong results in limited domains, but not broad enough deployment to build the compounding dataset and platform advantage.

  • Runs out of runway before distribution locks in: The most realistic failure mode is not “tech doesn’t work,” but “we didn’t scale deployment fast enough to turn proof into inevitability.”


How much money have you raised in the last 12 months, and from where?

We have raised a small pre-seed SAFE in the last 12 months from angels and an institutional anchor. We are not disclosing the full cap table publicly at this stage, but we can share details in diligence.
