You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.
This project aims to develop concrete proposals for "Live Governance" - new governance architectures that use AI to enable more responsive and contextually aware public administration while maintaining coherence and consistency. Rather than forcing standardization, Live Governance leverages AI to allow local adaptation of rules and processes while preserving their underlying spirit and purpose.
Develop detailed policy proposals and/or proofs-of-concept for Live Governance tools. This could include:
Live Regulation - AI systems that provide context-sensitive regulatory guidance
Live Administration - Adaptive processes for government services and licensing
Live Democracy - Tools for incorporating local perspectives into legislation
Live Accountability - Enhanced systems for public access to government information
Identify key technical requirements and implementation challenges
Create clear explanatory materials to make Live Governance concepts accessible to policymakers and other stakeholders
These will be achieved through:
One day per week of dedicated research over 6 months.
Regular engagement with the High Actuation Spaces (HAS) community for feedback and refinement
Development of policy proposals and proofs-of-concept
Outreach to potential collaborators in government, legal tech, and policy
The $6,000 grant would support one day per week of research work over 6 months, enabling focused development of Live Governance proposals and proofs-of-concept. A $3,000 grant would support an abbreviated research effort that would scope possible Live Governance tools without developing them into policy proposals or proofs-of-concept.
The project is part of the High Actuation Spaces research agenda led by Sahil Kulshrestha. It has emerged from engagement with the broader High Actuation Spaces community, and we anticipate this collaboration will continue. More information about the conceptual framework underlying this proposal can be found in the Live Theory LessWrong sequence.
Most likely causes of failure:
Technical requirements exceed near-term AI capabilities
Regulatory or privacy concerns limit implementation possibilities
Unable to effectively communicate complex concepts to stakeholders
Difficulty balancing local adaptability with systemic coherence
No funding received in the last 12 months. There is a pending grant application with the Future of Life Institute.