A post-AGI world will face critical challenges:
How do we ensure people live meaningful lives when jobs are automated and traditional labor is no longer economically valuable?
How do we safeguard against collapse when increasingly powerful AI is deployed within misaligned incentive structures (e.g., markets)?[1]
How can we distribute AGI-generated wealth in a way that benefits all of humanity?
While many socio-technical alignment solutions focus on either top-down regulatory measures or bottom-up voluntary pledges, we believe there is a third way to address these problems: when market coordination leads to bad outcomes, we can augment or replace markets in a sector with more finely aligned AI-based mechanisms.[2]
Building on our granular conception of values and meaning, the Meaning Alignment Institute (MAI) will build an LLM coordinator that allocates resources and suggests trades based on what people would find most meaningful, trial it with a population of 200 people, verify that this mechanism beats participants' regular consumer spending at achieving that goal, and publish our findings in a paper.
Goals:
Develop AI-driven market alternatives that fulfill deeper human preferences better than regular markets.
Demonstrate that AI can facilitate meaningful economic activity in a post-labor society, based on these deeper preferences (by comparing the AI's spending to ordinary consumer spending).
Articulate an inspiring vision for post-AGI economies, through an accompanying paper and media.
We will achieve this by:
Building a prototype LLM coordinator and running it with small groups of 12-20 people. The prototype will identify which deeper preferences are unfulfilled in participants' lives[3] and suggest collaborations and activities (rather than traditional "labor") that fulfill them.
Running a larger test with 200 participants. The AI will be supplied with a $20K budget and, in addition to the above, will allocate these resources based on which deeper preferences are currently unfulfilled in the community (the allocation step is sketched in code after this list).
Comparing the system's allocations against participants' ordinary consumer spending.
Publishing a paper on our methods and results, including specific insights on post-labor activities and AI-driven wealth distribution models.
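As a rough illustration of the allocation step only, the sketch below shows how elicited unmet "deeper preferences" could be matched greedily against candidate activities under a fixed budget. The data structures, the coverage-per-dollar scoring rule, and the example data are assumptions for exposition, not our actual implementation; in the trial itself, both the preferences and the candidate activities would come from LLM conversations with participants.

```python
from dataclasses import dataclass


@dataclass
class Participant:
    name: str
    unfulfilled: set[str]          # deeper preferences elicited as currently unmet


@dataclass
class Activity:
    description: str
    cost: float                    # dollars drawn from the shared budget
    fulfills: dict[str, set[str]]  # participant name -> preferences this activity would meet


def coverage(activity: Activity, people: dict[str, Participant]) -> int:
    """Count how many currently unmet preferences this activity would fulfill."""
    return sum(
        len(prefs & people[name].unfulfilled)
        for name, prefs in activity.fulfills.items()
        if name in people
    )


def allocate(budget: float, candidates: list[Activity],
             people: dict[str, Participant]) -> list[Activity]:
    """Greedily fund the activities that meet the most unmet preferences per dollar."""
    funded: list[Activity] = []
    remaining = budget
    pool = list(candidates)
    while pool:
        affordable = [a for a in pool
                      if 0 < a.cost <= remaining and coverage(a, people) > 0]
        if not affordable:
            break
        best = max(affordable, key=lambda a: coverage(a, people) / a.cost)
        pool.remove(best)
        funded.append(best)
        remaining -= best.cost
        # Mark covered preferences as fulfilled so later picks target what is still unmet.
        for name, prefs in best.fulfills.items():
            if name in people:
                people[name].unfulfilled -= prefs
    return funded


if __name__ == "__main__":
    people = {
        "ana": Participant("ana", {"creative expression", "deep conversation"}),
        "ben": Participant("ben", {"deep conversation", "time in nature"}),
    }
    ideas = [
        Activity("weekend writing retreat", 400.0,
                 {"ana": {"creative expression"}, "ben": {"time in nature"}}),
        Activity("weekly dinner salon", 150.0,
                 {"ana": {"deep conversation"}, "ben": {"deep conversation"}}),
    ]
    for a in allocate(1000.0, ideas, people):
        print(f"fund: {a.description} (${a.cost:.0f})")
```

The greedy coverage-per-dollar rule here is just one plausible baseline; the trial would compare whatever allocation policy the LLM coordinator produces against participants' ordinary consumer spending.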
The requested funding of $185,000 will be allocated as follows:
1. Inference compute: $5,000
2. Project lead (part-time, 6 months): $30,000
3. Engineer (full-time, 6 months): $60,000
4. Designer (part-time, 3 months): $20,000
5. Budget for AI-driven allocator in large test (simulating AI-generated wealth): $20,000
6. Events & community engagement (focusing on post-labor activities): $25,000
7. Analysis of post-labor engagement and wealth distribution patterns: $15,000
8. Work on documentation & research publication: $10,000
Track record
Together with OpenAI, the Meaning Alignment Institute previously developed an alternative democratic process in which participants articulated their values in order to shape model behavior.
"We need this kind of innovation to make the interplay of human values actionable; the 'moral graph' is hugely promising." - Aviv Ovayada, Harvard's Berkman Klein
“Likely to make a big impact in open source models, as well, leading to safer models, and better democracies.” – Peter Wang, CEO of the open-source AI company Anaconda
This process is outlined in our paper, “What are human values, and how do we align AI to them?”. We are in the process of fine-tuning models based on this approach.
“Defining an alignment target for general-purpose chatbots is really hard, and I doubt anyone has the final answer yet—but from now on I’ll be pointing to Moral Graph Elicitation as the best approach I’ve seen so far.” – David Dalrymple, ARIA
"Excellent and inspiring work. I particularly like the use of generated stories & the "wisdom upgrade" question to get at what is important about values." – Saffron Huang, Collective Intelligence Project & UK AI Safety Institute
"An exciting and promising research direction that can help us answer the important question of 'what should we align our models to?'" – Teddy Lee, OpenAI Collective Alignment
Team
Joe Edelman, Co-Founder at the Meaning Alignment Institute, has worked extensively on the logic of market failures, on why markets often lead to bad outcomes for human beings, and on methods for gauging benefit and meaning across a population. He has also developed large-scale recommendation systems at, e.g., CouchSurfing.
Ellie Hain, Co-Founder at the Meaning Alignment Institute, has researched post-AGI social and market narratives for 8+ years, and worked on designing meaning-centric community experiences for the past 4 years:
“If you enjoy my moloch/winwin content, this is the best rendering of the problem & solution I’ve seen thus far.” – Liv Boeree, AI and x-risk media creator, Win Win podcast.
“Ensuring AI goes well may be the most important problem of our time—possibly of all time. MAI goes straight to the heart of the problem, reaching into the depths of human nature to discover not just what we want to align AI with, but which new world is possible once AGI is here. I spent a night with Ellie and the team and left feeling inspired.” – Tim Urban, blogger, Wait But Why.
Risks
LLM Limitations in Identifying Meaningful Exchanges
Cause: The AI system struggles to accurately identify or suggest truly meaningful multi-way exchanges or activities.
Outcome: Proposed activities and resource allocations fail to fulfill deeper human needs or create genuine value for participants.
Impact: Undermines the project's goal of demonstrating AI's potential to facilitate meaningful post-labor activities and equitable resource distribution.
Failure to Translate Success into Systemic Change
Cause: Even if the experiment succeeds, existing market incentives and entrenched economic structures prove too resilient to change.
Outcome: The project's innovative approaches fail to gain traction beyond the experimental setting, unable to compete with or replace traditional market mechanisms.
Impact: Missed opportunity to implement meaningful economic reforms, potentially leaving societies unprepared for post-AGI economic challenges.
Problems at Larger Scales
Cause: The test pool of 200 participants doesn’t surface the challenges posed by bad actors, system-gamers, arbitrageurs, etc., and these end up dooming the model at larger scales.
Outcome: Our experiment appears successful but larger-scale simulations or tests show that it can’t work at the scales where it’d be needed.
Impact: Results may not be generalizable to larger populations or real-world economic scenarios, limiting the project's applicability and credibility.
We are also seeking funding from SFF and individual donors. Any additional funding will be used to expand and refine the scope of the research, run additional tests with different populations, and create a consortium for meaning economies.
[1] With our current incentive structure, we’re likely to soon see a proliferation of arms race dynamics among nations and labs, AI girlfriends and hyper-stimulation, AI-driven political manipulation, etc. We expand on this here and here.
[2] Such solutions often involve an economic middleman, like a health insurance company, which aggregates purchasing power but allocates it using a publicly-audited, non-market mechanism. Such a middleman collects from diverse actors, but ensures, e.g., that hospitals are paid by the health level they generate or maintain in a population. Thus, market incentives are aligned with a human-beneficial outcome. We believe these kinds of mechanisms can be greatly aided by the kind of large-scale qualitative data LLMs can harvest and reason about.
[3] We have built and developed this part of the system. You can try articulating your values here.