
Grant for Research into Infrabayesian Physicalism (GRIP)


Paul Rapoport

Not funded · Grant
$0 raised

Project summary

Research on technical/mathematical aspects of alignment, conducted primarily within the infrabayesian framework.

What are this project's goals and how will you achieve them?

  1. Provide a readable redistillation (or two) of the infrabayesian framework. A first draft of this already exists, but more work is needed to turn it from a minimum viable proof of concept (which a few people have already used to great effect) into something clearer and more approachable. Funding would allow me time to write and edit, and would buy you a readable writeup of a notoriously tangled (but useful!) framework; a one-formula sketch of the framework's core move appears after this list.

  2. Establish a major result/definition for the ALTER Prize, probably within IB physicalism or logic. Funding would give me time to fully engage with underexplored directions in IB and would buy you a math PhD's expertise and focus.

  3. (Stretch goal) If you grant me enough funding, a research collaborator could produce a companion writeup to mine, targeted at a less mathematically technical audience, while mine will not shy away from the necessary mathematical machinery. Sufficient funding would let me bring that collaborator in fully and pay for their time as well.

  4. (Stretch goal) If you grant me enough funding, that same collaborator and I could work together for long enough to create/train a proof-of-concept RL agent that wins UDT puzzles like Troll Bridge or Perfect Transparent Newcomb; a toy sketch of what "winning" means here appears after this list.
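For readers who haven't met the framework named in goal 1: a minimal sketch of its core move, simplified to the "crisp" case where a belief is just a closed convex set of probability distributions (the full sa-measure machinery is exactly what the writeup would unpack). The notation here is illustrative, not drawn from the proposal.

```latex
% Crisp-infradistribution sketch: a belief is a closed convex set \Theta of
% probability distributions, and expectations are taken worst-case over it.
\underline{\mathbb{E}}_{\Theta}[f] \;=\; \inf_{\mu \in \Theta} \mathbb{E}_{\mu}[f]

% An infrabayesian agent then selects the policy with the best worst case:
\pi^{*} \;=\; \operatorname*{arg\,max}_{\pi} \; \inf_{\mu \in \Theta} \mathbb{E}_{\mu}\left[ U \mid \pi \right]
```

The full framework generalizes this maximin picture considerably, which is much of why a careful redistillation is valuable.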
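To make goal 4 concrete, here is a toy Python sketch of Perfect Transparent Newcomb; the payoffs and names are illustrative assumptions, not part of the proposal. "Winning" means ending up like the policy-selection (UDT-style) agent below, which one-boxes and takes the million, rather than the act-based best-responder, which two-boxes.

```python
# Toy sketch of Perfect Transparent Newcomb (illustrative, not the proposal's code).
# Both boxes are transparent; the predictor fills box B with $1,000,000 iff it
# predicts the agent one-boxes, and the prediction is perfect.

BOX_A = 1_000           # box A always contains $1,000
BOX_B_FULL = 1_000_000  # box B's contents when one-boxing is predicted

def payoff(policy: str) -> int:
    """Payoff of committing to `policy`, given that the predictor foresees it."""
    box_b = BOX_B_FULL if policy == "one-box" else 0
    return box_b if policy == "one-box" else box_b + BOX_A

# UDT-style agent: chooses among *policies*, knowing the predictor responds
# to the policy itself. It one-boxes and walks away with $1,000,000.
udt_choice = max(["one-box", "two-box"], key=payoff)

# Act-based (CDT-style) agent: treats box contents as already fixed, so
# grabbing both boxes always looks $1,000 better. It two-boxes and gets $1,000.
cdt_choice = "two-box"

print(udt_choice, payoff(udt_choice))  # -> one-box 1000000
print(cdt_choice, payoff(cdt_choice))  # -> two-box 1000
```

The interesting part of the stretch goal is getting an RL agent to learn that policy from experience rather than having it hard-coded, as here.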

How will this funding be used?

Primarily as a research stipend/living expenses for me (and possibly also for a research colleague I'm already working well with), but also partially as living expenses for my grandmother, who lives with me.

Who is on your team and what's your track record on similar projects?

If funding permits, I'd also bring in Charles Wittel, who may also be applying for funding here.

I have earned a PhD in pure math (thesis preprint here). That research went decently well, especially given that it was conducted in the middle of a pandemic; apart from that, I have no particular track record yet.

What are the most likely causes and outcomes if this project fails? (premortem)

Likely causes: burnout; personal or familial injury or illness; or my turning out to be much worse at math-adjacent research than at pure math research.

Likely outcomes: not getting very much done on the writing or research.

Likely causes: IB turns out to be the wrong framework.

Likely outcomes: tossing out the entire plan and going with a different framework (finite-factored sets, maybe?).

What other funding are you or your project getting?

None so far.

Similar projects
Matthew Farr · Collaboration to develop a DAG formalism to express instrumentality

Stipend to upskill under and collaborate with Sahil K and Topos for 4-6 months, seeking to obtain teleological DAGs as the dual of causal DAGs

Technical AI safety · $0 raised
Alexander Bistagne · Alignment Is Hard

Proving Computational Hardness of Verifying Alignment Desiderata

Technical AI safety · $6.07K raised
Clark Urzo · Blackbelt

A scalable, non-infohazardous way to quickly upskill via digestible, repeatable exercises from papers and workshops.

Technical AI safety · $0 raised
Jaeson Booker · Jaeson's Independent Alignment Research and work on Accelerating Alignment

Collective intelligence systems, Mechanism Design, and Accelerating Alignment

Technical AI safety · $0 raised
Lawrence Chan · Exploring novel research directions in prosaic AI alignment

3 month

Technical AI safety · $30K raised
Bart Bussmann · Epistemology in Large Language Models

1-year salary for independent research to investigate how LLMs know what they know.

Technical AI safety · $0 raised
Abram Demski · Understanding Trust

Funding Basic Theoretical AI Safety Research

$100K raised