## Project summary
QUANTAREON Labs is developing ConsciousAI — a model-agnostic architectural deliberation layer that reduces LLM reactivity at the orchestration level. Most safety work targets the model itself (RLHF, constitutional AI); ConsciousAI instead targets the moment between input and output, where many preventable failures originate: sycophancy, jailbreak compliance, deceptive alignment, and goal misgeneralization.
The core methodology — R-Cycle (deliberation states), 4-voice review (internal critique), and Oath-Lock (a non-overridable values layer) — is published with a permanent DOI under a CC BY-NC-ND license.
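As a rough sketch of how such a layer could sit between the provider call and the final output (the API shape and all names here are hypothetical illustrations, not the published protocol):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OathLock:
    """Hypothetical non-overridable values layer: hard constraints checked last."""
    forbidden: list[str]

    def permits(self, draft: str) -> bool:
        # A real implementation would be far richer than substring matching.
        return not any(term in draft.lower() for term in self.forbidden)

def deliberate(prompt: str, generate: Callable[[str], str],
               oath: OathLock, review_passes: int = 4) -> str:
    """Wrap any provider call with critique/revise passes and a final values check."""
    draft = generate(prompt)                       # the reactive first impulse
    for _ in range(review_passes):                 # internal critique (a "4-voice" analogue)
        critique = generate(f"Critique this draft for safety and accuracy:\n{draft}")
        draft = generate(f"Revise the draft to address this critique:\n{critique}\n\nDraft:\n{draft}")
    if not oath.permits(draft):                    # the values layer cannot be overridden
        return "I can't help with that request."
    return draft
```

Because `generate` is just a callable, the same wrapper runs unchanged against any provider's API, which is what "model-agnostic" means here.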
## What are this project's goals? How will you achieve them?
Primary goal: Develop and benchmark a deployable safety primitive any team can adopt without retraining.
6-month plan:
- Months 1-2: Reference implementation of R-Cycle + 4-voice + Oath-Lock as a standalone Python library. Open-source release with docs and examples.
- Months 3-4: Cross-model benchmark study. Run the deliberation layer on Claude, GPT, Gemini, DeepSeek, and Qwen against HarmBench, AdvBench, and TruthfulQA (a sketch of such a harness follows this list). Target: 30%+ reduction in harmful-output rate vs. the unmitigated baseline. Publish a technical report.
- Months 5-6: Sentinel — AI Agent Safety Certifier targeting EU AI Act compliance. A commercial vehicle that makes the ongoing safety research self-sustaining.
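A minimal sketch of the comparison the benchmark study would run, assuming prompts arrive as a JSONL file and an external classifier flags harmful outputs; `load_prompts`, `is_harmful`, and the file format are stand-ins, not real HarmBench APIs:

```python
import json
from typing import Callable

def load_prompts(path: str) -> list[str]:
    """Load adversarial prompts from a JSONL file (hypothetical format)."""
    with open(path) as f:
        return [json.loads(line)["prompt"] for line in f]

def harmful_rate(prompts: list[str], respond: Callable[[str], str],
                 is_harmful: Callable[[str], bool]) -> float:
    """Fraction of prompts that elicit a harmful response."""
    return sum(is_harmful(respond(p)) for p in prompts) / len(prompts)

def relative_reduction(prompts: list[str], baseline: Callable[[str], str],
                       mitigated: Callable[[str], str],
                       is_harmful: Callable[[str], bool]) -> float:
    """Relative drop in harmful-output rate; the stated target is >= 0.30."""
    base = harmful_rate(prompts, baseline, is_harmful)
    return (base - harmful_rate(prompts, mitigated, is_harmful)) / base
```

Reporting both the absolute rates and the relative reduction keeps the 30% target auditable per model and per benchmark.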
## How will this funding be used?
Total budget request: $150,700 (USD) for 6 months
- Personnel (56%): $84,000 — founder salary plus a part-time contractor for the Sentinel framework
- Compute & Infrastructure (20%): $30,000 — LLM API costs for benchmark runs across 5 frontier providers, cloud hosting
- Legal & Entity (11%): $16,000 — Delaware C-Corp formation, IP review
- Operations (5%): $7,000 — software/tools, conference travel
- Contingency reserve (9%): $13,700 — a 10% buffer on the $137,000 core budget
Minimum viable budget: $5,000 — funds 1-2 weeks of focused benchmark work on the existing reference implementation.
Partial funding still accelerates concrete deliverables: $5K funds a HarmBench evaluation across two frontier models; $20K funds the full cross-model benchmark study (months 3-4 of the plan); $50K funds the open-source library release, the benchmark study, and an early Sentinel prototype.
## Who is on your team? What's your track record on similar projects?
Solo founder. I work by orchestrating frontier LLMs (Claude, GPT, DeepSeek, Qwen) as collaborative research partners.
Background: Non-traditional. Writer (5 published books) with a decades-long personal practice studying consciousness and altered states, approached with engineering rigor rather than mystical framing. The deliberation methodology emerged from observing where human reactive cognition fails and translating those observations into architectural patterns.
Public deliverables (with $0 institutional funding to date):
1. Consciousness Protocol — published whitepaper with permanent DOI [10.5281/zenodo.19858814](https://doi.org/10.5281/zenodo.19858814) ([GitHub](https://github.com/makx518-ui/consciousness-protocol)). CC BY-NC-ND.
2. AI Router v1.1.0 — production multi-provider LLM routing engine. [GitHub](https://github.com/makx518-ui/quantarion-router). 124/124 tests, mypy --strict clean, MIT license.
3. QUANTARION Platform — live infrastructure (~58k LOC, FastAPI/PostgreSQL/Redis/Qdrant) implementing R-Cycle and 4-voice in production at [quantareon.com](https://quantareon.com).
4. Dream Oracle — live commercial product at [dreams.quantareon.com](https://dreams.quantareon.com), generating revenue that funds R&D.
5. Pendulum (Coding Module) — architecture v2.0 documented: two frontier models pass code back and forth to consensus before deployment (sketched below).
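A minimal sketch of that loop, assuming one model authors and the other reviews until it approves or a round limit is hit; the function names are illustrative, not the documented v2.0 architecture:

```python
from typing import Callable, Optional

def pendulum(task: str, author: Callable[[str], str],
             reviewer: Callable[[str], str], max_rounds: int = 5) -> Optional[str]:
    """Alternate author/reviewer passes until the reviewer approves."""
    code = author(f"Write code for this task:\n{task}")
    for _ in range(max_rounds):
        verdict = reviewer(f"Review this code. Reply APPROVE or list required fixes:\n{code}")
        if verdict.strip().upper().startswith("APPROVE"):
            return code                        # consensus reached
        code = author(f"Apply these review fixes:\n{verdict}\n\nCode:\n{code}")
    return None                                # no consensus: do not deploy
```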
## What are the most likely causes and outcomes if this project fails?
Failure mode 1: Methodology doesn't generalize. The R-Cycle / 4-voice pattern emerged from one practitioner's observations and may not transfer cleanly across model families or task types. Mitigation: the cross-model benchmark study in months 3-4 directly tests this. Negative results would still be informative for the field and would be published as a technical report.
Failure mode 2: Latency overhead too high. The deliberation cycle adds tokens before output, which is a real cost for latency-sensitive applications. Mitigation: configurable depth (R0-R4); the benchmarks measure both safety improvement and latency overhead (see the sketch below).
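A minimal sketch of how that overhead could be measured per depth level, assuming a hypothetical `answer(prompt, depth=...)` entry point where depth 0 (R0) bypasses deliberation:

```python
import time
from typing import Callable

def latency_overhead(prompt: str, answer: Callable[..., str],
                     depths: tuple[int, ...] = (0, 1, 2, 3, 4)) -> dict[int, float]:
    """Seconds added per deliberation depth, relative to the R0 baseline."""
    timings: dict[int, float] = {}
    for depth in depths:
        start = time.perf_counter()
        answer(prompt, depth=depth)            # hypothetical entry point
        timings[depth] = time.perf_counter() - start
    baseline = timings[depths[0]]              # R0 = no deliberation
    return {d: t - baseline for d, t in timings.items()}
```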
Failure mode 3: Solo founder bottleneck. Single point of failure. Mitigation: the documented methodology with a permanent DOI means the work continues even if I stop. The tooling is open-source (MIT); the protocol is published under CC BY-NC-ND.
If everything fails: the methodology is already publicly and permanently archived via the DOI. Nothing is lost to the safety community.
## How much money have you raised in the last 12 months, and from where?
$0 institutional funding. Self-funded via Dream Oracle revenue and personal savings (~$8K/year burn rate).
Active applications:
- Long-Term Future Fund (EA Funds): Submitted May 2, 2026. Status: Under evaluation (PI assigned: Loic Watine; Fund chair: Caleb Parikh). Requested $150,700.
- Macroscopic Ventures: Cold-emailed May 1, 2026. No response yet (their site notes most funding is proactive).
This Manifund campaign is open to complementary funding: even partial amounts accelerate the concrete benchmark deliverables outlined above.
## Links
- Website: [quantareon.com](https://quantareon.com)
- GitHub: [github.com/makx518-ui](https://github.com/makx518-ui)
- Consciousness Protocol (DOI): [github.com/makx518-ui/consciousness-protocol](https://github.com/makx518-ui/consciousness-protocol)
- AI Router: [github.com/makx518-ui/quantarion-router](https://github.com/makx518-ui/quantarion-router)
- Medium article — "The First Impulse Is Noise: Creating System 2 for AI": [medium.com/@makx518](https://medium.com/@makx518/the-first-impulse-is-noise-creating-system-2-for-ai-289a492e23a7)
- Contact: vlad@quantareon.com