
Empirical research into AI consciousness and moral patienthood


Robert Long

Active grant · $12,020 raised of a $12,000 funding goal.
Fully funded and not currently accepting donations.

Project summary

5-week salary to further empirical research into AI consciousness and related issues bearing on potential AI moral patienthood.

What are this project's goals and how can they be achieved?

  1. Write a strategy doc for research on AI consciousness and related issues relevant to AI moral patienthood, and elicit feedback.

  2. Spend more time on empirical research projects about AI consciousness, and (as before) related issues relevant to AI moral patienthood—henceforth, I’ll just say “AI consciousness” as shorthand.

How will this funding be used?

The funding covers time spent on the project goals that would otherwise go toward public-facing work (e.g., speaking to journalists, writing magazine articles, appearing on podcasts) and toward applying for funding and jobs.

Who is on the team and what's their track record on similar projects?

The salary is for Rob Long, an expert on AI consciousness. See Rob’s newly released report on AI consciousness. Rob just completed the philosophy fellowship at the Center for AI Safety, and before that he worked on these issues at the Future of Humanity Institute as head of the Digital Minds Research Group. He has a PhD in philosophy from NYU, supervised by David Chalmers.

What are the most likely causes and outcomes if this project fails? (premortem)

  • A personal issue (e.g., family or health) taking up Rob’s time.

  • Rob failing to prevent other work priorities from taking up his time.

  • Failing to properly scope the project goals.

What other funding is this person or project getting?

None.

Similar projects

  • Bart Bussmann — Epistemology in Large Language Models: 1-year salary for independent research to investigate how LLMs know what they know. (Technical AI safety, $0 raised)

  • Lucy Farnik — Discovering latent goals (mechanistic interpretability PhD salary): 6-month salary for interpretability research focusing on probing for goals and "agency" inside large language models. (Technical AI safety, $1.59K raised)

  • Lisa Thiergart — Activation vector steering with BCI. (Technical AI safety, $30.3K raised)

  • Lawrence Chan — Exploring novel research directions in prosaic AI alignment: 3 months. (Technical AI safety, $30K raised)