Technical alignment aims to ensure AI does what humans want. But what happens when what humans want causes unnecessary harm? That's the problem we're fixing.
Most people using AI today don't realize their own values are the most dangerous part of the system. If that doesn't change before AGI arrives, every man-made problem the world has gets permanently worse.
The data AGI will be trained on is being written right now, through every decision made at work, every lesson taught at home, and every choice made by someone trying to build a better life. Most of that data comes from a world that rewards taking, domination, and short-term gain at any cost. Once AGI is deployed, what it learned from is permanent. We need to change what it learns from before that happens.
Proven Success is building an AI system that helps people achieve their goals while generating verified proof that good values produce better outcomes. We're starting by helping business owners scale their businesses through an AI called Revi, then expanding step by step until we reach all 8 billion people.
Every decision documented through Revi becomes part of the record AGI will learn from about what human behavior actually looks like. We're building the largest record of good-values human behavior ever created for AI training.
Learn more about the human values problem and why this is the most important problem that needs to be solved before AGI arrives: https://provensuccess.ai/blog/human-values-problem-other-half-of-ai-safety
Our main goal now is to solve the human values problem before AGI arrives.
We're doing this by:
Pursuing a direct partnership with Google DeepMind to access the compute and skilled teammates needed to solve the human values problem at the quality and scale it actually requires.
Building an AI system called Revi to help business owners, as the first target audience, scale their business by making better decisions that solve their biggest problems, while documenting verified proof that good values produce better outcomes.
During this six-month window, I'm focused entirely on getting in front of the right people at Google DeepMind and making the case for why this partnership matters. Our CTO, James, is iterating on the MVP and getting early users onto the product.
This funding covers three things in order of priority. First, operational costs to keep Revi running. Second, six months of stipend for me and James to relieve financial pressure. Third, travel to connect directly with Google DeepMind. If operational costs exceed projections, the stipend and travel money will be redirected toward covering all operational costs until we get to work with Google DeepMind.
Minimum and full funding explanation:
With $25,000, we cover our documented and projected operational costs to keep Revi running. This is the minimum needed for the business to survive.
With $50,000, we add six months of stipend for me and James, plus $10,000 for travel to connect directly with Google DeepMind.
I'm Ray Dela Rama, founder and CEO of Proven Success, established in February 2025. James Hizon is our CTO and co-founder. He designed and built the first version of Revi from scratch.
The foundation of everything we are building is laid out in this essay: The Human Values Problem: The Other Half of AI Safety.
It makes the case that bad values are the root cause of every major man-made problem, and that solving the human values problem before AGI arrives is the most leveraged thing humanity can do. It engages directly with what the leaders closest to building AGI are already saying publicly, including Demis Hassabis and Shane Legg of Google DeepMind.
I don't have traditional credentials in AI-related work. What I have is close to three decades of living inside broken systems, refusing to adopt the values those systems rewarded, and eventually understanding why those systems keep failing, clearly enough to build a company around fixing it.
We don't have time to solve the human values problem by ourselves before AGI arrives. That is why partnering with Google DeepMind is the most effective path forward.
The most likely cause of failure is that the partnership with Google DeepMind doesn't materialize within the six-month window. Everything in this project depends on that partnership. Without it, we don't have the compute or the talent to build Revi at the quality and scale the problem requires. A secondary cause is running out of runway before that partnership happens, which is exactly what this funding is designed to prevent.
If the project fails at the project level, we have to stop working on this full-time, because we don't have time to build everything from scratch before AGI arrives.
If the project fails at the broader level, the human values problem remains unsolved going into AGI. The window to shape what AGI learns from closes. AGI arrives trained predominantly on data from a world that rewards taking, domination, and short-term gain at any cost. Every problem that bad values have always produced gets amplified at a speed and scale that has never been possible before.
That is what is actually at stake.
We have not raised any money in the last 12 months. We have been funding this entirely out of our own pockets.