Thanks so much for your support!
Oh, is the minimum locked once you create a post? I was tempted to move the minimum down to $700 and the ask down to $2000, but then again I can understand why you wouldn't want people to edit these after someone has made an offer, as that would be ripe for abuse.
In terms of why I'd adjust it: I'm trying to figure out what would actually motivate me to produce more of this content, rather than just putting a bit of extra money in my pocket without any additional content production. I figure that if there's a 20% chance of a post being a hit, I'd need at least funding for a week* in order for it to be worthwhile for me to spend a full day writing up a post (as opposed to the half-day that this post took me) — a 20% chance at a week's funding works out to roughly a day's worth in expectation, which is about what the write-up costs me in time.
In terms of the $2000 upper ask limit, I'm thinking it through as follows: it seems that if someone was able to write ten high-quality alignment posts in a year (quite beyond me at the moment, but not an inconceivable goal), then that'd work out at $20k, and it might be reasonable for writing such posts to be a third of their income.
(PS. I decided to do a quick browse of highly upvoted posts on the Alignment Forum. It seems that quite a high proportion of highly upvoted posts are produced by people who are already established researchers or PhD students, so if there were a funding scheme for hits** and that scheme aimed to avoid double-funding people, the cost would be less than it might seem.)
Anyway, would be great if I could edit the ask, but no worries if you would like it to remain the same.
* My current burn rate is lower because I'm trying really hard to save money, but this is a rough estimate of what my natural burn rate would be.
** Such a scheme couldn't be based primarily on upvotes, because that would invite vote manipulation and distort people towards writing whatever content receives upvotes.
Chris Leong
about 1 month ago
Funnily enough, I was going to reduce my ask here, but I hadn't gotten around to it yet, so now it may look like it's in response to this comment when I was going to do it anyway.
Chris Leong
about 1 month ago
You should probably write about who you are and how your participation would benefit AI Safety.
Chris Leong
about 2 months ago
Hey Felipe, I'm currently doing community building at AI Safety Australia and New Zealand and I'm quite interested in decision theory (currently doing an adversarial collaboration with Abram Demski, a MIRI researcher, on evidential decision theory). Would be keen to hear from you if you end up in Australia.
Chris Leong
4 months ago
I would be really excited to see the establishment of an AI safety lab at Oxford, as this would help establish the credibility of the field, the lack of which is one of the core problems holding alignment research back.
That said, I suspect that a proper research direction is crucial when establishing a new lab, as it's important to lead people down promising paths. I haven't evaluated their proposed directions in detail, so I would encourage anyone considering donating large amounts of money to do so themselves.
Disclaimer: Fazl and I have discussed collaborating on movement building in the past.