Project summary
What are we doing?
We're creating a tool called HI2T that uses AI to study and influence people's views on AI safety. It analyses intersecting demographic factors such as age, gender, and education, and derives insights into how demographic clusters think. It can collect data and craft targeted persuasive or educational messages based on these granular insights.
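As a purely illustrative sketch (the actual method is not public and the names below are invented for this example), one simple way to derive "hypergranular" insights is to group survey responses by the full intersection of demographic attributes and summarise opinion within each cluster:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey rows: (age_band, gender, education, agreement_score 1-5)
responses = [
    ("18-24", "F", "degree", 4),
    ("18-24", "F", "degree", 2),
    ("18-24", "M", "school", 2),
    ("55-64", "M", "degree", 5),
    ("55-64", "M", "degree", 4),
]

def intersectional_clusters(rows):
    """Group responses by the full demographic intersection and
    average the opinion score within each cluster."""
    clusters = defaultdict(list)
    for *demographics, score in rows:
        clusters[tuple(demographics)].append(score)
    return {cluster: mean(scores) for cluster, scores in clusters.items()}

print(intersectional_clusters(responses))
```

A real system would, of course, use far richer features and modelling; this only shows the basic shape of intersectional (rather than single-axis) analysis.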
Our initial goal is to acquire a patent for the tool. A preliminary review by a specialist patent attorney from the firm Page White Farrer suggests it is likely patentable, and we found no similar patents in initial searches of relevant databases.
Why are we doing this?
Our ultimate aim is to enhance understanding of AI and empower action on a wide scale. Our tool is designed to collect opinions and raise awareness about AI safety issues, feeding this information back into the development process. To ensure responsible development and ethical use, we're seeking funding for patent protection. Additionally, profits from potential licensing of the tool will be reinvested into effective altruism efforts.
What are the benefits/harms?
It may be possible to generalise such a tool to enhance data collection efficiency and communication efficacy in various domains, including:
Policy making and AI system design
Public health campaigns
Climate change advocacy
Event impact tracking
Social and market research
Political campaigning
Product development
Public relations
Advertising
If this proves to be the case, the potential benefits are significant. Increasing data collection and processing efficiency, coupled with custom-tailored messaging, has the potential to increase education and engagement across many areas that pose significant risks to civilisation and the biosphere.
However, some of these areas, like political campaigning and advertising, present a risk of social harm, especially if the tool is used by misaligned AI systems or by bad actors with access to such systems. The potential for the tool to generalise increases the potential for misuse. While our tool's full capability across these domains isn't yet confirmed, similar techniques have been used before, though in cruder, less efficient forms (see: Cambridge Analytica). Due to its potential impact, our most urgent goal is to patent this technology with as wide a set of claims as possible to ensure its responsible development and deployment.
What does HI2T stand for anyway?
Hypergranular Insight and Influence Tool. Catchy, right?
What are this project's goals and how will you achieve them?
We're currently seeking funding for the project's initial phase: patenting the tool. Our tool builds on existing methods but introduces a "novel inventive step", which is critical for patent approval. While software-related patents can be tough to secure, the attorney we spoke to believes in our tool's patentability.
We've drafted a patent application but need funding for expert attorneys to refine and submit it.
After submission, we'll seek additional funding to:
Create a fully featured prototype.
Test its effectiveness using best-in-class data.
Perform replication studies of high-quality research to measure its real-world relevance.
Continuously improve based on these benchmarks.
Automate opinion collection and AI safety outreach, tailoring communication to diverse groups.
License our tool to interested parties (pending case-by-case independent ethical review).
Reinvest licensing profits into effective altruism, a goal to be enshrined in our organisational by-laws.
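The outreach step above (tailoring communication to diverse groups) could, in a minimal form, work by tracking which message variant each demographic cluster engages with best. This is a hypothetical sketch with invented names, not the tool's actual design:

```python
from collections import defaultdict

# Hypothetical engagement log: (cluster_key, message_variant, engaged?)
log = [
    (("18-24", "degree"), "technical-brief", True),
    (("18-24", "degree"), "technical-brief", True),
    (("18-24", "degree"), "story-led", False),
    (("55-64", "school"), "story-led", True),
    (("55-64", "school"), "technical-brief", False),
]

def best_variant_per_cluster(rows):
    """For each demographic cluster, pick the message variant with the
    highest observed engagement rate (ties broken arbitrarily)."""
    # cluster -> variant -> [engaged_count, shown_count]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for cluster, variant, engaged in rows:
        tally = stats[cluster][variant]
        tally[0] += int(engaged)
        tally[1] += 1
    return {
        cluster: max(variants, key=lambda v: variants[v][0] / variants[v][1])
        for cluster, variants in stats.items()
    }

print(best_variant_per_cluster(log))
```

In practice one would want exploration (e.g. a bandit algorithm) rather than a greedy pick, but the sketch conveys how per-cluster feedback could drive tailored communication.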
The broad goal is to blend and automate traditional polling and analysis with targeted messaging for scalable, impactful communication. This automation can greatly boost the efficiency of both research and communication.
How will this funding be used?
The primary use of the minimum funding (£10,000, approx. $12,500) will be attorney fees to enhance our patent application's chances of approval.
Submitting a UK patent application costs around £7,000. Extra funds will cover additional legal costs during the patent review.
If we secure more than the minimum funding, it will be used for:
Supporting the project lead's full-time work for three months, focusing on obtaining long-term funding and developing the prototype.
Compensating part-time collaborators to help build, test and refine the prototype tool.
Who is on your team and what's your track record on similar projects?
Eden Morrison - Independent AI advocate, lighting designer, technical manager for EartH Hackney, BECTU branch committee member, science communication artist.
Seth A. Herd - CEO and manager of eCortex, Inc. from 2012 to 2022, performing research on higher cognition and human biases using neural network models of brain function. Currently at the Astera Institute, researching the alignment of brain-like AI systems.
Dr. Alex Allan - CTO and co-founder of Kortical, building and deploying data science and machine learning AI systems for large companies and government organisations.
Victoria Brook - President, EA Edinburgh. Contract work in EA distillation & communication. Content moderator for the EA forum. Working with aisafety.info and aisafety.quest.
While our team is newly assembled, our skills and backgrounds are varied and complementary. We're confident in our ability to manage this project’s initial phase. Initial funding will be spent on patent attorneys with specialist expertise in the AI space. As we progress, we'll be expanding our team and seeking external assistance. We would also welcome guidance and input from anyone interested in funding the project.
What are the most likely causes and outcomes if this project fails? (premortem)
The primary risk of this project's initial phase is the UK intellectual property office deeming our system unpatentable.
A secondary concern is that our initial searches may have overlooked an existing patent for a similar tool.
If either occurs, funds spent on this phase might largely be considered lost. However, there could still be positive outcomes:
A non-patentable verdict allows us to redirect future funds from patenting efforts to development.
The tool might be patentable in other regions.
Finding a similar patented tool identifies its creators, facilitating targeted AI safety advocacy.
It's important to note that once the patent application is filed, work on the tool begins. However, due to potential misuse concerns, our first step is securing IP protection to ensure ethical use and development.
What other funding are you or your project getting?
For this project's first phase, based on legal advice, we're only discussing details with those under an NDA. Sharing it publicly could jeopardise our patent chances.
Hence, we're solely seeking funding on Manifund for this phase. Once we've applied for the patent, we'll consider funding from other selected sources.
Currently, our team is donating all the development work.
For potential funders wanting further information, we are happy to provide this but request that you sign an NDA.