Tsvi Benson-Tilsen

@tsvibt

former AGI alignment researcher at MIRI, now working on human intelligence enhancement

https://tsvibt.blogspot.com/
$79,517 total balance
$0 charity balance
$79,517 cash balance

$0 in pending offers

About Me

Trying to reduce existential risk from AGI. Making http://berkeleygenomics.org/

Projects

human intelligence amplification @ Berkeley Genomics Project

Comments

Ambitious AI Alignment Seminar

Tsvi Benson-Tilsen

4 days ago

Overall: I recommend funding this to at least ~$240K, the level needed for the Seminar + 1-year fellowship.

I researched AGI alignment at MIRI for about 7 years; in my judgement, the field is generally not well set up to appropriately push newcomers to work on the important difficult core problems of alignment. Personally, my guess is that AGI alignment is too hard for humans to solve at all any time soon. But if I were wrong about that, I would probably still think that novel deep technical philosophy about minds would be a prerequisite. I'm not up to date, so this impression might be partly incorrect, but broadly my belief is that most AI safety training programs are not able to create a context where people have the space, and are spurred, to think about those core problems.

Since this program is new, it's hard to judge. I've worked with Mateusz on alignment research, and I think he gets the problem, and the description of the program seems around as promising as any I've seen. Because the space hasn't found great traction yet, trying new things is especially valuable. So, IF you want to fund AGI alignment research, this should probably be among your top investments.

Further, if you want to fund this program, I'd strongly recommend funding it at least to the minimum bar to continue it with the 1-year fellowship. The reason is that learning to approach the actual AGI alignment problem is a slow process that probably needs multiple years, with sparse but non-zero feedback; so the foundations laid down in the month-long seminar might tend to somewhat go to waste without longer-lasting scaffolding.

stable working pods of three to five people

I would suggest creating space for even smaller groups (the standard in yeshiva, I gather, is pair study, and personally I need substantial time/space set aside for solo thinking). The area is very strongly thirsty for inside-view perspectives, so an admixture of space for those to grow is needed, even given the opportunity cost. You could try to offload that to before and after the program, but I'd suggest also making space for it during. E.g. a "Schelling" time for 2-hour solo walks/thinks, or whatever.

We actually consider it very likely that the project "fails" in the sense that it will complete with none of the Fellows producing any clearly promising research outputs or directions at building pieces of a solution. The reason/cause of this would be that the problem being tackled is one of great difficulty, very slippery, and with difficult feedback loops with reality.

This is an unbelievably based statement, which on the object level would hopefully contribute to making an environment where actual new perspectives (rather than just the Outside the Box Box https://www.lesswrong.com/posts/qu95AwSrKqQSo4fCY/the-outside-the-box-box ) can grow, and furthermore indicates some degree of hopeworthiness of the organizers on that dimension.

participants will share their learning with each other through structured showcases and peer instruction

Sounds cool, but do keep in mind that this could also create a social pressure to "publish or perish" so to speak, leading to goodharting. A not-great solution is to make it optional or whatever; it's not great because it's sort of just lowering standards, and presumably you do want to have people aiming to work hard and do the thing. Maybe there are better solutions, such as somehow explicitly and in common knowledge making it "count for full points" to present on "here's how I have a really basic/fundamental question, and here's how I kept staring at that question even though it's awkward to keep staring at one thing and not have publishable technical results from that, and here's my thoughts in orienting to that question, and here's specifically why I'm not satisfied with some obvious answers you might give". Or something. In other words, alter the shape of the landscape, rather than making it less steep.

Selection criteria for the fellows:

I would suggest somewhat upweighting something like "security mindset", or (in the same blob), something like "really gets that you can have a plausible hypothesis, but it's wrong, and you could have quickly figured out that it's wrong by actually trying to falsify it / find flaws in it, but you probably wouldn't have quickly figured out that it's wrong just by bopping around by default". And/or trying to bop people on the head to notice that this is a thing, though IDK how to do that. This is especially needed because, since we don't get exogenous feedback about the objects in question, we have to construct our own feedback (i.e. logical reasoning about strong minds).

human intelligence amplification @ Berkeley Genomics Project

Tsvi Benson-Tilsen

4 months ago

Thanks so much, Rahul! Key support. We will continue working hard and hopefully smart. @rahulxyz

human intelligence amplification @ Berkeley Genomics Project

Tsvi Benson-Tilsen

4 months ago

Progress update

What progress have you made since your last update?

Here's what we've been up to from March until now (September 15):

* Several events, including a mini-conference in April with 6 speakers on genomics and reproduction.

* Our main event: Reproductive Frontiers Summit 2025. In June, over 100 attendees--scientists, entrepreneurs, parents, and enthusiasts--gathered, learned, and planned.

* We started publishing talks from our summit, with more coming: https://www.youtube.com/@BerkeleyGenomicsProject

* Several articles (https://berkeleygenomics.org/Explore), including a statement of a vision for a future with reprogenetics (https://berkeleygenomics.org/articles/Genomic_emancipation.html) and a high-level visual roadmap for reprogenetics (https://berkeleygenomics.org/articles/Visual_roadmap_to_strong_human_germline_engineering.html)

* Some behind-the-scenes support for the reprogenetics field.

* Research on chromosome selection.

* BGP in the news: https://www.wsj.com/us-news/silicon-valley-high-iq-children-764234f8


What are your next steps?

Upcoming:

* Talks and other videos

* White paper(s) on chromosome selection (https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html#method-chromosome-selection)

* Articles

Follow for updates:

https://x.com/BerkeleyGenomic

https://www.youtube.com/@BerkeleyGenomicsProject

https://bsky.app/profile/berkeleygenomics.bsky.social

human intelligence amplification @ Berkeley Genomics Project

Tsvi Benson-Tilsen

4 months ago

Thanks @username and whoever directed the donation! (For a second I thought you were trying to pressure me into buying the book haha (which I did months ago).)

human intelligence amplification @ Berkeley Genomics Project

Tsvi Benson-Tilsen

10 months ago

@DusanDNesic Thank you! We can't guarantee success but we can guarantee that we will work hard :)

human intelligence amplification @ Berkeley Genomics Project

Tsvi Benson-Tilsen

10 months ago

@Rafe Thank you!

Responded to some points here: https://x.com/BerkeleyGenomic/status/1909101431103402245

human intelligence amplification @ Berkeley Genomics Project

Tsvi Benson-Tilsen

10 months ago

@matiroy Thanks Mati!

human intelligence amplification @ Berkeley Genomics Project

Tsvi Benson-Tilsen

10 months ago

@rahulxyz Thanks! Much appreciated. We'll make it happen. Maybe.

human intelligence amplification @ Berkeley Genomics Project

Tsvi Benson-Tilsen

10 months ago

@Kaarel Thanks for your offer!

I'm unsure whether I agree with this strategically or not.

One consideration is that it may be more feasible to go really fast with [AGI alignment good enough to end acute risk] once you're smart enough, than to go really fast with convincing the world to effectively stop AGI creation research. The former is a technical problem you could, at least in principle, solve in a basement with 10 geniuses; the latter is a big messy problem involving myriads of people. I have substantial probability on "no successful AGI slowdown, but AGI is hard to make". In those worlds, where algorithmic progress continually burns the fuse on the intelligence explosion, a solution remains urgent, i.e. prevents more doom the sooner it comes.

But maybe good-enough AGI alignment is really extra super hard, which is plausible. Maybe effective world-coordination isn't as hard.

But I do mostly agree with this in terms of long-term vision.

Transactions

For | Date | Type | Amount
human intelligence amplification @ Berkeley Genomics Project | 22 days ago | project donation | +1000
human intelligence amplification @ Berkeley Genomics Project | 22 days ago | project donation | +2410
human intelligence amplification @ Berkeley Genomics Project | about 1 month ago | project donation | +80
Manifund Bank | about 1 month ago | withdraw | 9000
human intelligence amplification @ Berkeley Genomics Project | about 2 months ago | project donation | +75000
human intelligence amplification @ Berkeley Genomics Project | 4 months ago | project donation | +10000
human intelligence amplification @ Berkeley Genomics Project | 4 months ago | project donation | +27
Manifund Bank | 5 months ago | withdraw | 8984
human intelligence amplification @ Berkeley Genomics Project | 5 months ago | project donation | +100
human intelligence amplification @ Berkeley Genomics Project | 6 months ago | project donation | +20
human intelligence amplification @ Berkeley Genomics Project | 8 months ago | project donation | +50
Manifund Bank | 8 months ago | withdraw | 9001
human intelligence amplification @ Berkeley Genomics Project | 8 months ago | project donation | +205
human intelligence amplification @ Berkeley Genomics Project | 9 months ago | project donation | +500
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +2000
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +7000
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +50
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +60
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +100
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +500
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +1000
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +1000
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +100
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +200
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +50
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +5000
human intelligence amplification @ Berkeley Genomics Project | 10 months ago | project donation | +50