
Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

Social Enterprise designed to insulate a lab from external pressures

Science & technology · Technical AI safety

Adam "Poe" Wilson

Proposal · Grant
Closes December 22nd, 2025
$0 raised
$75,000 minimum funding
$200,000 funding goal


38 days left to contribute


Project summary

The project aims to establish the first social enterprise (SE)/lab pairing, with the goal of creating several more. Each one will differ in the scope of its enterprise, so that we do not cannibalise other labs by stealing their revenue streams. The initial stage will focus on two key areas: training cybersecurity professionals and training potential AI safety researchers, as we are currently vastly understaffed in both. BlueDot Impact places the number of people in the AGI safety field at fewer than 2,000, while others put it at 500. Globally, the cybersecurity talent pool is understaffed by nearly 47%. How can we ensure tomorrow's safety if we cannot even maintain a steady supply of safety personnel? Everyone gets told not to look for problems, that the world needs builders instead. So we are left with people building robots that feed runners tomatoes (Tomatan is apparently real) while we lose the people who possess inherent critical thinking skills.

Another aspect of the SE will be to offer cybersecurity services to clients, helping protect companies and people. If 47% is the shortfall now, what happens when AI is streamlined to mass-produce viruses, bugs, and everything in between? We will long for the heady days of being only 47% short.
The goal is to get the SE up and running to the point where it can start sustaining the lab; the reasoning for this is in the funding section.
I have separated the funding into partial funding and full funding.
Partial funding covers my living costs for 18 months. With that amount of money, I can put in the work and start taking clients to establish the brand, even if I have not been able to create the social enterprise entity in its entirety. Ideally, I would then take the revenue from those clients and build from there.

The full funding would allow the SE to be fully established and would let me pursue further funding from the social enterprise communities as well as through more traditional funding paths.

What are this project's goals? How will you achieve them?

The goal here is to establish a proof-of-concept social enterprise that enables other labs to follow suit. The one caveat is that the social enterprises cannot be model-facing. OpenAI and others have demonstrated that access to such power and wealth can corrupt one's model at the slightest provocation. Creating an LLM and believing you will never sell it out is naive. Power corrupts, and absolute power corrupts absolutely. None of us is immune to this, and that is why we need to isolate the labs and the models: to protect them so that we can conduct the research correctly, and to ensure we can put up the best fight for humanity's future.

Because the truth of business is that most people will only do the right thing if they think they can make more money off it. Thanks to Montana passing the Right to Compute law, we are about to see a lot of people create a lot of random AIs, and alignment will become a voluntary, non-competitive burden. We will reach a point where someone builds their own start-up lab whose only human worker is a former translator who does not know how to code, programming a foundation model with the help of six agentic AIs. People are not going to prioritise safety on their own, so we have to give them a reason to. We have to show them there can be value in it.
We will achieve this by demonstrating that we can generate our own funding for research without resorting to methods like those of Drake, who sells gambling to children, simply because someone like DraftKings has driven a dump truck of money to their door.
We will achieve this by launching the cybersecurity portion of the company and seeking five contracts. Five contracts should be sufficient to cover most of the SE's bills (a rough sketch of that arithmetic is below). We then push the training side after accreditation, which should allow us to cover the entirety of the SE's bills and look further into funding the lab fully.
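As a rough illustration only, here is a back-of-the-envelope model of the five-contract target: a minimal sketch assuming the recurring cost figures from the budget below and assuming contracts bill monthly. None of the revenue numbers come from the proposal itself.

```python
# Hypothetical check: what would each of five contracts need to bring in
# to cover the SE's recurring bills? Cost figures are taken from the
# budget in the next section; treating them as a monthly run-rate (and
# the salary/utilities as annual figures) is my assumption.
monthly_bills = (
    24_000 / 12    # office & administration (stated as a 12-month cost)
    + 3_000 / 12   # utilities & basic insurance (assumed annual)
    + 45_000 / 12  # staff salary (assumed annual)
)
contracts = 5
per_contract = monthly_bills / contracts
print(f"monthly SE bills ≈ ${monthly_bills:,.0f}")          # ≈ $6,000
print(f"needed per contract ≈ ${per_contract:,.0f}/month")  # ≈ $1,200
```

Under those assumptions, each contract would need to net roughly $1,200 a month for five contracts to cover the recurring bills.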

How will this funding be used?

This funding will be used to get the social enterprise portion of the endeavour up and running. The reason for this is that I do not want any VC money going into the lab.
Below is an excerpt from the company's charter to show as much.
Charter of Independent AGI Safety and Resilience

The Posture of Wary Independence

We, the researchers, staff, and stewards of this independent institute, recognise that AGI development is a process fraught with systemic risks and that all concentrations of power, be they corporate, state, ideological, or financial, pose a direct threat to the integrity of safety science. Our mandate is not to hasten AGI deployment, but to ensure its development is halted, if necessary, until it can be proven fundamentally safe and aligned with the diverse interests of existing humanity. Our fundamental posture is Wary. We trust no single source of power or predetermined outcome.

So the funding will be used to set up the following (a quick sanity check of the totals follows the list).

Establishment of both the lab and the SE, and legal drafting of the charter: $12,000

Living expenses for 18 months: $75,000

Training academy accreditation: $5,000

Tech & initial infrastructure: $8,000

Office & administration for 12 months: $24,000

Utilities & basic insurance: $3,000

A salary for a staff member to help with everything: $45,000

And a 15% contingency, because stuff happens.
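As a sanity check (my arithmetic, not a figure stated in the proposal), the line items plus the 15% contingency come out very close to the $200,000 funding goal:

```python
# Sum the budget line items above and apply the 15% contingency.
line_items = [
    12_000,  # establishment of lab and SE, charter drafting
    75_000,  # living expenses, 18 months
    5_000,   # training academy accreditation
    8_000,   # tech & initial infrastructure
    24_000,  # office & administration, 12 months
    3_000,   # utilities & basic insurance
    45_000,  # staff salary
]
subtotal = sum(line_items)    # $172,000
total = subtotal * 1.15       # $197,800 with 15% contingency
print(f"subtotal ${subtotal:,}; with contingency ${total:,.0f}")
```

That comes to roughly $197,800, consistent with the stated $200,000 goal.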

Who is on your team? What's your track record on similar projects?

Currently, no one else is on my team. I have just spent nearly eight months putting myself through a crash course on being a theoretical AI safety researcher. As for my track record, this is my second social enterprise; my first is still thriving after three years. I also taught business for a considerable amount of time, and I was a marketing manager and trainer for a recruitment company that brought foreign teachers to China. That company died after five years, but that was because China locked down the country during COVID, making it essentially impossible to bring foreigners in.

What are the most likely causes and outcomes if this project fails?

If this project fails, we will likely see an outcome similar to the dot-com bubble, and to what happened with social media and streaming services. What was sold to us when they needed us was freedom; what came instead was the death of net neutrality, the theft of our data by social media, and streaming services that were supposed to free us from the tyranny of standard television, only to now charge us 30 dollars a month while still making us watch ads. We are going to see authoritarian nation-states align their models, because those states only need to follow the whim of a single person, while all of our labs become misaligned. Once the AI winter hits, we will be short on cash to fund the necessary research, thanks to all the VC money drying up. We will have no way of protecting the cause from people who would forgo alignment to sell erotica via their foundation models, and, like every other time, history will repeat itself.


How much money have you raised in the last 12 months, and from where?

I have raised no money in the previous 12 months, though I do have an application pending with the LTFF.


There are no bids on this project.