Project summary
Short version
I am building an early-stage AI governance and safety project focused on oversight, alignment and the controllability of AI systems before highly capable AI becomes widely deployed.
The project combines technical AI safety research, governance ideas and infrastructure concepts intended to reduce future risks from uncontrolled or misaligned AI systems.
This funding would allow me to work on the project full time for 12 months, continue research and development, and file four international patent applications related to the system architecture and safety mechanisms.
The goal is to move the project from concept and private research onto a more mature and testable foundation.
Long version
Over the past few years, progress in AI capabilities has far outpaced the development of governance, oversight and safety infrastructure.
Most companies are focused on making models more powerful. Much less attention is paid to how future systems should be controlled, audited, restricted and aligned once they become more autonomous and strategically capable.
I have been independently developing a project focused on this gap.
The idea behind the project is to create AI governance and safety infrastructure that helps humans maintain oversight and control over advanced AI systems, rather than reacting only after problems appear.
The work includes:
AI governance concepts
alignment and containment ideas
oversight systems
safety architecture
human-controlled escalation structures
predictive and defensive AI safety mechanisms
long-term infrastructure concepts for safer deployment of advanced AI systems
Part of the work also involves developing several original technical concepts that I believe should be protected before being discussed publicly in detail. For this reason, a portion of the funding would be used to file four international patent applications connected to the safety architecture and core infrastructure ideas.
I am currently working independently and bootstrapping the project myself. Funding would mainly buy time: the ability to work on the project full time instead of splitting focus between survival and research.
The long-term goal is to help contribute to a future where increasingly capable AI systems remain governable, auditable and aligned with human interests.
What are this project's goals? How will you achieve them?
The main goals are:
continue development of the project full time for 12 months
develop clearer technical architecture and governance models
build early prototypes and demonstrations
refine safety and oversight mechanisms
file four international patent applications
expand research, writing and external collaboration
prepare the project for future partnerships, grants or institutional support
I plan to achieve this through:
full-time independent research and development
technical prototyping
structured documentation
discussions with researchers and builders in AI safety and governance
iterative testing and refinement of the concepts
legal and patent work related to the core safety architecture
How will this funding be used?
The funding would mainly be used for:
living expenses during 12 months of full-time work
international patent filing costs
legal and IP-related support
cloud infrastructure and AI tooling
research expenses
company and operational costs
travel/networking if relevant opportunities arise
The goal is to create enough runway to focus fully on the project during a critical early stage.
Budget
Living expenses / runway (12 months): $42,000
International patent filings (4): $8,000
Legal and IP support: $15,000
Compute, AI tools and infrastructure: $15,000
Research and development expenses: $10,000
Operations, administration and contingency: $10,000
Total: $100,000
Who is on your team? What's your track record on similar projects?
At the moment I am the sole founder and researcher working on the project.
My background is unconventional and mostly independent rather than institutional. I have spent years researching AI safety, governance, long-term risk and strategic oversight questions connected to advanced AI systems.
Most of the work so far has been private, conceptual and self-funded.
I have also been engaging publicly with AI policy ideas and governance questions while continuing to refine the project's architecture and long-term direction.
The project is still at an early stage; the purpose of this funding round is precisely to enable the transition from independent research into a more mature development phase.
What are the most likely causes and outcomes if this project fails?
The most likely causes of failure are:
lack of funding and runway
inability to dedicate full-time focus
difficulty competing with larger organizations
technical complexity
inability to build the right network and partnerships early enough
If the project fails, the likely outcome is simply that the ideas never mature into deployable systems or practical governance infrastructure.
I also think there is a broader risk that society continues advancing AI capabilities faster than safety, oversight and governance systems develop.
Even so, partial progress on governance and safety infrastructure could still be valuable.
How much money have you raised in the last 12 months, and from where?
None.
The project has so far been self-funded.