Conceptual Boundaries Workshop (already funded, but some additional things)

Technical AI safety, Global catastrophic risks
Chris Lakin

Not funded (Grant)
$0 raised

Updates

2024 February 9:

Got a $40k ACX grant for this. See https://manifund.org/projects/seed-fund-for-bo

2024 January 13:

The people I invited to the workshop all seem to have research funding anyway, so right now I'd be most interested in funding for a second workshop on this topic, which Davidad has asked me to plan. The Mathematical Boundaries Workshop will be larger: 5 days, in Berkeley (Lighthaven), in April. Seeking up to $93,392, but probably more like $75k. Email me (chris@chrislakin.com) and I will send you the planning document and budget.

2023 December:

We've received a $5k grant from LTFF for this workshop, and they have said it can go towards seed funding if we want.

Details

We're running https://formalizingboundaries.ai/

The workshop itself is already fully funded; we're now also seeking seed funding for empirical projects ideated at the workshop.

At the workshop, we will decide research directions and brainstorm empirical projects with the "chefs" (people who have thought a lot about boundaries-related ideas: Davidad, Critch, Garrabrant, etc.), then assign those projects to the "cooks" (people who can execute on the projects and have the time to take on a new one).

Having pre-committed funding to support future work could be the difference between conversations stopping at the end of the workshop and individuals changing their research agendas to pursue continuations of the workshop projects immediately, without funder delays.

$40k? [1 month of funding × 4 cooks × $10k/mo]
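Spelling out that arithmetic, assuming all four cook slots are filled for the full month: 4 cooks × $10k/mo × 1 month = $40k.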

We already have many great applications from potential "cooks".

Future workshops

It seems like there are far more interested people than we have space for at this planning workshop, and we are also in the early stages of planning a larger boundaries workshop in Berkeley in Spring 2024. We are also exploring organizing other workshops with <related big research group>. We may use funds for running these workshops, too.

Who is on your team and what's your track record on similar projects?

Evan Miyazono:

  • Just started https://atlascomputing.org/, which is meant to be the org for the Open Agency Architecture.

  • Led metascience and special projects at Protocol Labs for 6 years

    • Evan’s team included davidad

  • Recently organized a metascience workshop with the Santa Fe Institute: https://www.santafe.edu/events/accelerating-science-risks-incentives-and-rewards 

  • His team created and ran the Protocol Labs Research RFP program, hypercerts, Funding the Commons (a conference series), The Arcological Association, and gov4git, and was the first supporter of Discourse Graphs.

    • https://research.protocol.ai/blog/2023/pausing-pl-research-open-research-grants/ 

    • https://hypercerts.org/

    • https://fundingthecommons.io/

    • https://arcological.xyz/ 

    • https://gov4git.org/ 

    • https://discoursegraphs.ai/ 

    • evan@atlascomputing.org

  • PhD in Applied Physics at Caltech

Me:

  • I wrote the compilation on boundaries back when the topic wasn’t yet organized

  • Currently funded for independent research on boundaries by a private donor

  • Ideated this workshop 

  • Past experience in physics (CMU) and operations (e.g., ran ops for the ELK Winners’ Retreat)

  • Funded for part-time rationality research by CFAR

  • chris@chrislakin.com

Similar projects

  • Chris Lakin: Seed fund for boundaries-based empirical AI safety projects after workshop (ACX Grants 2024). $40K raised.

  • Sahil: [AI Safety Workshop @ EA Hotel] Autostructures. Scaling meaning without fixed structure (...dynamically generating it instead.) $8.55K raised.

  • Anthony Ware: Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap. Identifying operational bottlenecks and cruxes between alignment proposals and executable governance. (Technical AI safety, AI governance, Global catastrophic risks.) $0 / $23.5K raised.

  • Dhruv Sumathi: AI For Humans Workshop and Hackathon at Edge Esmeralda. Talks and a hackathon on AI safety, d/acc, and how to empower humans in a post-AGI world. (Science & technology, Technical AI safety, AI governance, Biosecurity, Global catastrophic risks.) $0 raised.

  • Avinash A: Terminal Boundary Systems and the Limits of Self-Explanation. Formalizing the "Safety Ceiling": An Agda-Verified Impossibility Theorem for AI Alignment. (Science & technology, Technical AI safety, Global catastrophic risks.) $0 / $30K raised.

  • Anna Salamon: aCFAR 2025/6 Fundraiser. Revised CFAR workshops: same Sequences-epistemics, same CFAR classics, more support for individual freedom, sovereignty, and authorship. $0 / $125K raised.

  • Lawrence Chan: Exploring novel research directions in prosaic AI alignment (3 months). (Technical AI safety.) $30K raised.