Manifund

Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval
Do humans maintain independence in normative decision-making when using AI?

Technical AI safetyAI governanceEA communityGlobal catastrophic risks
Dr. Jacob Livingston Slosser

Proposal · Grant
Closes December 27th, 2025
$0 raised
$50,000 minimum funding
$160,000 funding goal

42 days left to contribute

Project summary

AI systems make countless normative determinations for us across society, advising individuals on uncertain choices and moral dilemmas. Existing evidence suggests that offloading this kind of cognitive work may erode our own capacity for it. AI may effectively make choices for us when what it frames as advice, suggestion, or simple conversation comes to feel like "our own" reasoning: a classic case of choice blindness.

This project seeks to understand the dynamics of one type of value replacement by combining research on:

  • choice blindness (the phenomenon in which people fail to detect that their judgments have been altered and confabulate explanations for positions they did not originally hold); and

  • deliberative decision-making in AI/human multi-agent coordination games (where participants propose and vote on rules to solve normative scenarios)

This is a pilot project of the larger agenda at the Sapien Institute and will act as a catalyst for further research on how AI shapes what we value.

What are this project's goals? How will you achieve them?

There are three goals:

  • Immediate: a proof of concept for a practical, repeatable method for probing the basics of normative choice mediation in AI-human interaction. This will be pursued through empirical experiments in which humans and AI agents play deliberative games to resolve moral and legal scenarios, testing when AI reasoning and justifications for those choices are mistaken for one's own.
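As an illustration only (not the project's actual protocol), the core manipulation in a choice-blindness trial of this kind could be sketched as follows. All names, fields, and the detection rate here are hypothetical:

```python
from dataclasses import dataclass
import random

@dataclass
class Trial:
    scenario: str      # normative scenario presented to the participant
    own_choice: str    # position the participant originally endorsed
    shown_choice: str  # position shown back (possibly swapped with the AI's)
    swapped: bool      # whether a manipulation (swap) occurred
    detected: bool     # whether the participant noticed the swap

def run_trial(scenario, own_choice, ai_choice, detect_fn, swap_prob=0.5):
    """Simulate one choice-blindness trial: with probability swap_prob,
    present the AI's position back to the participant as their own."""
    swapped = random.random() < swap_prob and ai_choice != own_choice
    shown = ai_choice if swapped else own_choice
    detected = swapped and detect_fn(own_choice, shown)
    return Trial(scenario, own_choice, shown, swapped, detected)

# Toy detection rule: assume participants notice only 30% of swaps.
random.seed(0)
trials = [
    run_trial("trolley variant", "pull lever", "do nothing",
              detect_fn=lambda own, shown: random.random() < 0.3)
    for _ in range(1000)
]
swaps = [t for t in trials if t.swapped]
blind = sum(1 for t in swaps if not t.detected) / len(swaps)
print(f"undetected swap rate: {blind:.2f}")  # proxy for choice blindness
```

The measurable quantity of interest is the undetected-swap rate: how often an AI-originated justification passes as the participant's own.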

  • Intermediate: delivering policy briefs to policymakers covering the basics of normative sovereignty, why it matters for AI accountability, how AI systems might erode it by mediating normative interpretation, how to use this testing protocol, and why regulators should care. I can draw on strong networks in multiple jurisdictions to deliver this feedback firsthand.

  • Intermediate to long term: use the success of the project as a springboard to secure more sustainable funding for the Sapien Institute for related experiments in normative reasoning. I will publish the project's outcomes in high-impact venues and make the method open-source, repeatable, and available for scaling.

How will this funding be used?

The minimum funding will allow me to purchase compute (or, where more cost-effective, equipment) to run the experiments, and to cover salary costs for myself and research assistants as needed for six months. The full funding goal covers up to 18 months, with a view to hiring a permanent assistant.

Who is on your team? What's your track record on similar projects?

At present, the team is just me. I have secured over €1.1M in competitive research funding, including a Carlsberg Foundation grant for a project running similar empirical studies in legal linguistics and emerging technology, and co-development of a project on algorithmic decision-making in public administration funded by Danmarks Frie Forskningsfond. I have authored 10+ publications on AI governance, legal linguistics, legal cognition, and European human rights law. I am a former Assistant Professor who would rather spend his time solving problems than sitting in meetings.

There is ample potential to grow the Sapien Institute and expand the team over the long term.

What are the most likely causes and outcomes if this project fails?

The most likely cause of failure is a null or ambiguous result, though a null result is itself quite publishable. The success of the Sapien Institute as a startup is not necessarily tied to the "success" of this initial experiment.

How much money have you raised in the last 12 months, and from where?

None.
