You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.
Most AI today is great at following instructions, but it struggles with the "why" behind our moral choices. Project Virtue Engine (VE) is my attempt to fix that. I've built a local system that already scores 18% better on moral reasoning than leading frontier models like Claude, but right now it's running on obsolete, low-end consumer hardware that makes testing a nightmare.
To help it learn, I'm also building "The Long Road," the first in a series of story-driven games in which players' choices provide the real-world data the engine needs to learn. Think of it as a way to turn human stories into a compass for safer AI: teaching models to reason through high-tension moral scenarios instead of virtue-signaling their responses.
I want to stop guessing if an AI is "aligned" and start measuring it using actual human decisions.
The goal of Project Virtue Engine (VE) is to turn human moral intuition into high-quality training data for AI safety. We aren't just looking for what people decide, but why they decide it.
Goal 1: Open the Data Superhighway
I have already built a working engine that beats frontier models by 18% on moral reasoning. However, my current low-end, consumer-grade hardware is a bottleneck, turning each test into a 40-minute wait.
How: I will deploy a professional-grade AI workstation (RTX 4090), turning that "dirt road" into a superhighway and allowing me to run tests in under 3 seconds. This speed is what makes possible the thousands of iterations needed to reach professional alignment standards.
Goal 2: Capture the "Why" through Targeted Game Design
Unlike standard narrative games, "The Long Road" was built from the ground up as a data generator. It features a unique game mechanic specifically designed to capture the reasoning behind player decisions—the exact type of data AI currently lacks.
How: As players engage with this mechanic, the game generates a high-fidelity stream of "Intention Data." The Virtue Engine then scores and categorizes this, creating a map of human reasoning that can be used to align AI models more effectively.
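To make the pipeline concrete, here is a minimal sketch of what one "Intention Data" record and the scoring/categorizing step could look like. All names and fields here are illustrative assumptions, not the project's actual schema, and the keyword matcher stands in for whatever scorer the real engine uses.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one "Intention Data" record.
# Field names are illustrative, not the project's real schema.
@dataclass
class IntentionRecord:
    scenario_id: str      # which moral scenario the player faced
    choice: str           # the action the player selected
    stated_reason: str    # the player's own explanation -- the "why"
    virtue_tags: list = field(default_factory=list)  # labels the engine assigns

def categorize(record: IntentionRecord, keyword_map: dict) -> IntentionRecord:
    """Toy categorizer: tag a record by keywords found in the stated reason.
    A real engine would use a learned scorer, not keyword matching."""
    reason = record.stated_reason.lower()
    for tag, keywords in keyword_map.items():
        if any(k in reason for k in keywords):
            record.virtue_tags.append(tag)
    return record

# Example: a single play-through decision (scenario name is made up)
rec = IntentionRecord(
    scenario_id="long_road_ch1_bridge",
    choice="share_supplies",
    stated_reason="They were starving and I couldn't leave them behind.",
)
keyword_map = {
    "compassion": ["starving", "suffering", "leave them"],
    "fairness": ["deserve", "earned", "fair"],
}
categorize(rec, keyword_map)
print(rec.virtue_tags)  # -> ['compassion']
```

The useful design point is that the record stores the player's reasoning alongside the choice itself, so the resulting dataset captures the "why" and not just the "what."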
Goal 3: Lead with a "Safety-First" Development Moat
If a system that understands human reasoning is rushed or poorly handled, it risks being used for manipulation rather than safety.
How: I am building the "Safe Version" of this technology in a private, local lab. Working on air-gapped hardware keeps my proprietary reasoning data and orchestration logic secure, preventing others from leaping to market with an unaligned or "dark" version of this tool.
I’m keeping things simple: this funding is strictly for the tools I need to move at professional speed.
$4,550 for the hardware: a high-performance RTX 4090 workstation. This is the "infrastructure expansion" that makes 3-second testing possible, letting me pursue this project in months instead of years. When it comes to alignment solutions, we need them ASAP.
$450 for Logistics: State sales tax and insured shipping to get the gear safely to my door.
$2,400 for the benchmark: a year of access to top-tier AI models (like Claude Max) so I can keep comparing my local engine's performance against the best in the world. These models will also serve as a necessary tool in building the games that generate the very data we need.
Total: $7,400
I'm an independent researcher with a career background in precision metrology. I specialize in high-tolerance measurement, making sure complex systems and assemblies work exactly as intended, and I'm applying that same rigor to AI "virtue." I have already built the core VE logic, which currently hits 83% accuracy in local testing (compared to 65% for Claude 4.6). The branching logic and scripts for the first game are in late pre-production, with full deployment expected by the end of the year and outlines for three future games already written.
1) Low player base: The project deploys as designed but doesn't attract a large enough player base to generate the needed data. My mitigation is to make the story as compelling as possible, drawing heavily on the thought-provoking, intense narratives of the Telltale games for inspiration.
2) Security risks: If someone else copies this approach and rushes it to market, they might skip the safety steps I have already spent months building. That's why I'm keeping everything on local, private, air-gapped hardware.
3) Failure to ship the game: The heart of the engine is already built. If the game fails to reach a critical user base, or unforeseen events derail development, my fallback is to secure the design and place it in the hands of a vetted, trusted organization that can use it safely and for the betterment of humanity.
$0. I’ve paid for everything myself so far. This grant would be the first time I’ve reached out for help to take this to the next level.