Building this site! Ran EA @ Tufts, where I also studied math.
https://www.linkedin.com/in/rachel-weinberg-789b23228/
I’m currently working full-time on Manifund as an engineer. Previously, I completed half a math degree, founded and ran my college EA group, and did operations for small-to-medium EA events. I have no prior grant-making experience, and chose to serve as a regrantor for the sake of dogfooding. Still, I hope I can get some good things funded!
I think the most important thing in the world right now is making sure the development of advanced AI goes as well as possible, so I plan to fund projects to that end. In recent months I’ve updated towards both thinking that AI governance is more neglected (based on conversations with funders) and more tractable (based on the public reaction to AI improvements) than I had previously thought, so I’m especially excited about funding governance projects! If I don’t find anything better to do with my money, I might just give it to GovAI or something.
I’m also interested in meta stuff (hence working on Manifund), and in accelerating and guiding talented people towards working on the most important problems. Since I don’t have tons of object-level ML knowledge, I may feel more qualified to fund more general things like this.
Rachel Weinberg
10 months ago
Hey Nuhu, this project seems pretty off-topic from the Manifold Community Fund, which is only funding projects related to Manifold Markets. Is it okay if I change this to a normal grant application and remove that tag?
Rachel Weinberg
10 months ago
Hey Paul, unfortunately Manifund won’t be able to fulfill this grant at this time. An unexpected influx of year-end regrants spent down the total pot of regrantor funding, meaning that we don't have enough left to fund a few of the last projects (like this one).
We’re so sorry if this created false expectations. Best of luck applying for funding elsewhere—hopefully Dan’s enthusiasm and support for your project will be of help, even if he couldn’t give you a grant directly.
Rachel Weinberg
10 months ago
Hey Peter, unfortunately Manifund won’t be able to fulfill this grant at this time. An unexpected influx of year-end regrants spent down the total pot of regrantor funding, meaning that we don't have enough left to fund a few of the last projects (like this one).
We’re so sorry if this created false expectations. Best of luck applying for funding elsewhere—hopefully Dan’s enthusiasm and support for your project will be of help, even if he couldn’t give you a grant directly.
Rachel Weinberg
10 months ago
Hey Alexander, I’m approving this project since research like this definitely falls within Manifund’s scope. However, I have to lower the amount from $280K to $190K, since that’s all that remains from the total pot allocated to regrantors. We’re really sorry for giving false expectations about how much money you’d be getting.
Rachel Weinberg
about 1 year ago
@NcyRocks oh haha oops, I didn't actually check, just assumed based on what you guys had said 😳👍
Rachel Weinberg
about 1 year ago
Okay, I've now unwrapped (N/A'd in Manifold speak, though we don't exactly have this functionality) this project as requested by the founders, and @NcyRocks has created a new proposal with a higher minimum which people who invested here (@MarcusAbramovitch, @Austin) can invest in instead. It'll now be hidden from the front page as well.
Rachel Weinberg
about 1 year ago
This looks pretty cool! How much engagement has aisafety.info gotten? I'd be curious about concrete analytics, like total unique visitors, average time spent, etc. And more qualitatively, what types of users does this attract and how does it help them? Like, is the theory of change similar to Rob Miles' channel, just an alternative interface/entry point, or is it useful to a different class of people or in a different way?
Relatedly, how has it been promoted, and how do you plan to promote it?
Rachel Weinberg
about 1 year ago
@Linch btw you need to sign the grant agreement before the donations go through (and you'll still be able to accept donations after that's done)
Rachel Weinberg
about 1 year ago
@vandemonian btw you have to sign the grant agreement before you'll get the grant
Rachel Weinberg
about 1 year ago
@vincentweisser @pegahbyte @jajsmith after talking to Jonas, I moved this project back into the proposal stage and lowered the minimum funding to $500. You probably got an email before that your donation offers were declined because this project hadn't reached the minimum funding, but now that won't be true. Let me know if you want to delete your offers for any reason, but otherwise, as soon as we approve this project your donations will go through as though the minimum had been $500 instead of $1000 originally.
Rachel Weinberg
about 1 year ago
@mapmeld @vincentweisser @esbenkran @JoshuaDavid since Evan withdrew his donation, which put this project back below the minimum funding bar, I put this project back in the proposal stage and undid your transactions. Let me know if any of you want to withdraw your offers too. Otherwise they'll only go through if/when this reaches its minimum funding bar ($1k) again.
Rachel Weinberg
about 1 year ago
@apollo btw you need to sign the grant agreement before the money goes to your account and you can withdraw. You can access it from the very top of the project page.
Rachel Weinberg
over 1 year ago
Approving this project! As a 501c3 we can fund only a small amount of lobbying, and we're excited to put some of that towards an AI moratorium. This does seem like the type of thing that could have downside risk, but the upside is also massive, and Holly seems to have thought really carefully about the risks and isn't planning to do huge things just yet.
Note that Holly signed a different grant agreement from the default one so she didn't have to agree not to try to influence legislation.
Rachel Weinberg
over 1 year ago
Note on Manifund funding projects trying to influence legislation: based on this from the IRS, Manifund should be able to give about $225,000 to lobbying over the next 4.5 months, or ~$125,000 based on the total we've spent so far. About $5k has already been offered to Holly for her AI moratorium proposal, which has yet to be approved but probably will be once we figure out an alternative grant agreement for her that doesn't make her promise not to try to influence legislation.
That is to say, we can't fully fund this unless we raise a lot more money, but we could partially fund it. Also flagging to regrantors that this has to clear some special lobbying bar, because funding it comes out of our lobbying funding pool specifically, which is much smaller than the ~$1.9M we have total.
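In case it's useful, here's a quick sketch of the sliding scale I believe that IRS page describes (my reading of the 501(h) expenditure test; treat the bracket structure as an assumption, not legal advice):

```python
def lobbying_limit(exempt_expenditures):
    """Nontaxable lobbying amount under the 501(h) expenditure test:
    20% of the first $500k of exempt-purpose expenditures, 15% of the
    next $500k, 10% of the next $500k, 5% of the rest, capped at $1M.
    """
    brackets = [(500_000, 0.20), (500_000, 0.15), (500_000, 0.10)]
    limit, remaining = 0.0, exempt_expenditures
    for size, rate in brackets:
        portion = min(remaining, size)
        limit += portion * rate
        remaining -= portion
    limit += remaining * 0.05  # everything above $1.5M
    return min(limit, 1_000_000)

print(lobbying_limit(1_500_000))  # 225000.0 -- the $225k figure
print(lobbying_limit(666_000))    # 124900.0 -- roughly the ~$125k figure
```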
Rachel Weinberg
over 1 year ago
@AaronSilverbook oh, I should have clarified: this is different from the SAFE agreement. I meant just the standard grant agreement that everyone signs (or rather, checks a box on). It's linked at the top of this project page.
Rachel Weinberg
over 1 year ago
Approving this!
The plan is for the $40k from Austin and Isaak to go in as an investment, where profits go to their regranting budgets. Contributions from other users will go through as regular donations.
Now all that's left is for @AaronSilverbook to sign the agreement.
Rachel Weinberg
over 1 year ago
I’ll match donations on this project 10% up to $200k.
(I’ve been considering some Dominant Assurance Contract scheme, because this project looks good and I know that people are interested in funding it, but with straight crowdfunding like we have on Manifund right now, people are incentivized to play funding chicken, whereas DACs make contributing below the minimum funding a dominant strategy. On the other hand, a DAC isn’t very powerful here, because it would only incentivize funding up to the minimum bar, and the minimum is so low. I’d still be down to do that if you (@jesse_hoogland) were willing to raise the minimum funding bar to, say, $150k, but this risks you not getting the funding at all.)
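To make the DAC payoff structure concrete, here's a toy sketch (the 5% refund bonus is a number I made up for illustration, and none of this is an existing Manifund feature):

```python
def contributor_payoff(pledge, value_if_funded, funded, refund_bonus_rate=0.05):
    """Toy payoff to a contributor under a dominant assurance contract.

    If the project reaches its minimum funding, the pledge is spent and the
    contributor receives whatever the funded project is worth to them; if it
    falls short, the proposer refunds the pledge plus a bonus.
    (Illustrative assumption only, not an existing Manifund feature.)
    """
    if funded:
        return value_if_funded - pledge  # ordinary crowdfunding outcome
    return refund_bonus_rate * pledge    # refund plus bonus: strictly positive

# Anyone who values the funded project above their pledge gains either way,
# so pledging beats holding back -- but only up to the minimum funding bar.
print(contributor_payoff(100, 150, funded=True))   # 50
print(contributor_payoff(100, 150, funded=False))  # 5.0
```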
Rachel Weinberg
over 1 year ago
By the way @apollo, I edited your profile to include your name and a username that's not a long uuid—there was a bug in the sign-in flow at the time you made your account, so it may not have prompted you to do that. Feel free to change those; I just thought it looked bad to have them blank.
Rachel Weinberg
over 1 year ago
@K-Tiffany I see how quoting that could have come off kind of mean, though I don't think Austin was intending to be mean or trolling—I think that passage is just parodying a very extreme version of a problem that's common in grant proposals, though I do agree with Austin that it's present here to some extent.
A really concrete thing is that I'd like to know in the very first sentence of the summary what the product literally does. Later in that paragraph it's kind of explained, but I'm still not sure exactly what "formal models" means in this context: can you give some examples of what's being modeled?
More broadly, the style at times feels marketing-speak-y, which I think most VCs and grantmakers are wary of. For example, the second sentence uses lots of big/rare words where simpler ones would suffice. I think it's trying to say that "formal models help people understand each others' point of view," and the fancy language makes me both not understand it as quickly and feel like I'm being manipulated or something. Or from the next sentence, what does "peer-to-peer network" mean? Why not just say they're sharing the models with other people?
My general advice:
- be really concrete
- use fewer and shorter words when possible
- lead with the most important things I need to know to evaluate the project
Also here's a comment I left on someone else's proposal with overlapping advice which you might find helpful.
Rachel Weinberg
over 1 year ago
Giving this admin approval because it's legal for us to fund, within Manifund's scope, and has low downside.
Some of my personal thoughts: the cheapness + concreteness of this is pretty cool, and it seems like the type of thing Manifund is particularly well-suited to fund, since it’s such a small ask and you need funding fast! Also didn’t know Taco Bell gave scholarships. That is wild and awesome.
Since I’ve been reading a bunch of these proposals lately, I have some advice on how you could improve the writeup*, which I mostly wrote before Austin funded it, but I figure why not share anyway:
- The project summary is the first thing people see, and should be short (1 paragraph) and to the point about what you’re actually going to do: your background is important (and something people often under-include!), but first I want to know what you’re going to do, what it will accomplish, and maybe why you think that matters. You don’t start talking about what the app does until about the third paragraph, and even then there’s not a single sentence that’s a really clear summary of it. That should be your first sentence.
- Then include your background in the track record section. Also btw, like Notion, you can highlight text and then paste a link to create a hyperlink, and type dash + space to create bullet points. Seems useful for formatting the list of past projects.
- Maybe I’m particularly clueless, but as someone with no hardware experience, the connection between the solar systems/batteries and your app isn’t obvious to me. What role do the phones play in the solar cars? Assume your audience is clueless about technical details and spell out in plain terms the connection between what you’re doing and the final product.
*Mostly this is stuff that I/the UI should have made clearer in the first place, like what exactly the prompts mean and what formatting the editor allows. Also thinking I should write up a grant-writing advice doc and link it from the create project page.
Rachel Weinberg
over 1 year ago
There's a cost-effectiveness analysis of this project on the EA Forum!
Rachel Weinberg
over 1 year ago
Raised the funding goal again to $58k, since that's the maximum amount that Rachel thinks she could use productively.
Rachel Weinberg
over 1 year ago
@Austin I know we don't require explanations for donations where the grantmaker didn't add the project themselves (though maybe we should, or at least prompt for it)—still, want to do a short writeup on what made you excited to fund this? And why you offered over the funding target?
Rachel Weinberg
over 1 year ago
Noting that I raised the funding goal from $34k to $43.6k to account for the ~22% tax that Rachel will have to pay on this grant.
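(For the arithmetic: $34,000 / (1 − 0.22) ≈ $43,600, i.e. the goal is grossed up so the after-tax amount still comes to $34k.)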
Rachel Weinberg
over 1 year ago
At first glance, when trying to foster skepticism, I had the same thought as you: that teams and mentorship make people more productive, so this grant could be a push in the wrong direction. On the other hand, he's been unusually successful so far as an independent researcher. If he's particularly well-suited to working independently, which most people struggle with, that's a kind of comparative advantage it might make sense to lean into, since mentorship and spots on established teams are in short supply.
Rachel Weinberg
over 1 year ago
Approving this grant on behalf of Manifund! This project is definitely within scope and Gabe & his team are legit.
Rachel Weinberg
over 1 year ago
Main points in favor of this grant:

Rachel’s work looks really valuable to me. I don’t have strong inside views on technical AI safety research paths, but there are a bunch of positive signals: she’s at a top PhD program, so unlike many other TAIS researchers, she has received high-quality mentorship and works with strong teams. She’s published papers with other well-regarded researchers, some of which have gotten into top journals, and others have won safety-specific awards. She also works with interns at CHAI and holds office hours for aspiring AI safety researchers, which were so successful that 80k began to fund and recommend them. The world is really short on good AI safety researchers, and possibly even shorter on good AI safety researchers who can offer mentorship to produce more good AI safety researchers. Rachel is both.
She’s also extremely undercompensated for this work at the moment. Her monthly term-time take-home income in the past year was about $2,800, and she lives in a high-cost area. Her lab cannot offer her higher compensation or better insurance. This type of work is easily worth a 6-figure salary, and probably worth a 7-figure salary, so paying $13,000 for a ~5% increase in productivity over a year seems like a good price.
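(Rough arithmetic: $13,000 is 5% of $260,000, so the grant pays for itself as long as a year of her work is worth at least that much.)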
Donor's main reservations:

None really. In the worst case, it makes no impact on her work but helps her personally, which seems both unlikely and not that bad relative to the potential downsides of other longtermist grants. In the best case, this makes the difference between a good researcher staying in what I think is the most important field, and her leaving it.
Process for deciding amount:

I initially offered $10k to cover her baseline medical expenses, in hopes that other donors would chip in a bit more as well. Then, after some discussion with Rachel about the costs that transparency imposed on her and the positive externalities I think it produces, we settled on $13k because this grant and her identity are public.
Mostly I didn’t offer $15-30k initially just because I only have $50k total to give out over 6 months and it’s been a week.
Conflicts of interest:

None.
Rachel Weinberg
over 1 year ago
Main points in favor of this grant:

For one, people in EA have asked for a compilation of donation data, some have compiled it for their own specific purposes, and when I built a UI on top of it that made the data easier to comb through, I got a positive response on the EA forum. Putting aside for a moment what people actually want to do with the data, just them asking for it is a decent signal that it’s useful.
In practice, easier access to data about where funding is going in EA (and in some cases, why) makes people’s beliefs more accurate, which makes their actions more effective (e.g. Vipul donated to the Animal Welfare Fund because the data indicated it was most neglected, and OpenBook caused Eliezer Yudkowsky to publicly change his mind about Slime Mold Time Mold). Plus, sometimes people don’t know where to apply for funding, and this data helps them figure out how similar projects have gotten funded. Finally, it might be useful to grantmakers, because they can see how projects they’re considering giving a grant to have been funded in the past.
Since this is providing retroactive funding, it’s not directly causing work to happen that would not have happened otherwise. Still, I think it would be a good community norm if people were rewarded for doing cool projects after the fact, because it creates better future incentives and lets people know concretely that their contribution was valued.
Donor's main reservations:

I’m impressed by and grateful for their time gathering the data, but I think almost all of the value is left on the table if the data isn’t made easier to interact with, analyzed, and aggregated. That remaining work could be done by Vipul or by someone else; I just think it would be a loss if the project ended here.
Process for deciding amount:

Since this is retro funding, it’s less anchored on the precise costs of the project. It feels like a reasonable amount: it’s probably under-compensation in terms of hourly rate, but it’s enough to feel like a significant reward for work well done.
Conflicts of interest:

None.
Rachel Weinberg
over 1 year ago
I think the point that LLMs, even with their current capabilities, make malicious attacks cheaper and more accessible to people outside of the government makes a lot of sense. I'd be curious to see a more expansive explanation of how you expect this to be destabilizing and how this affects your probability estimates of existential risk/what it means we (or more precisely, Open Philanthropy) should be doing differently.
Basically, in your opening you say you're going to address the probability of existential risk through a national security lens, and throughout the essay you talk about AI and national security (which may well be a neglected and useful perspective, I don't see much of that from the AI safety community!), but the conclusion on existential risk is basically "this will be destabilizing to all of human activity", which feels a bit unsatisfying?
(I'd probably take you up on your sell offer if the conclusion addressed this more concretely)
Rachel Weinberg
over 1 year ago
By the way, I changed your minimum funding to $720 as we discussed on the call, so now you own 10% at a valuation of $800 instead of $889.
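(That is, at the $800 valuation the $720 minimum corresponds to investors buying $720/$800 = 90% of the certificate, leaving your 10% stake unchanged.)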
Rachel Weinberg
over 1 year ago
Arjun, is this a web app or what? You say platform but then don't really explain. Also, what are you spending the money on?
| For | Date | Type | Amount (USD) |
|---|---|---|---|
| Manifund Bank | 3 months ago | deposit | +600 |
| Manifund Bank | 7 months ago | return bank funds | 21350 |
| Manifund Bank | 11 months ago | deposit | +10 |
| Manifold Tournaments | 12 months ago | user to user trade | 1 |
| Mirrorbot | 12 months ago | user to user trade | 1 |
| Scoping Developmental Interpretability | about 1 year ago | project donation | 3000 |
| Scoping Developmental Interpretability | over 1 year ago | project donation | 10150 |
| Manifund Bank | over 1 year ago | withdraw | 10 |
| Manifund Bank | over 1 year ago | deposit | +10 |
| Medical Expenses for CHAI PhD Student | over 1 year ago | project donation | 13000 |
| Manifund Bank | over 1 year ago | deposit | +50000 |
| <7fb3708a-3bd7-4b11-b2c4-ea19c1a555d4> | over 1 year ago | profile donation | 10 |
| Manifund Bank | over 1 year ago | deposit | +10 |
| CHAT Diplomacy | over 1 year ago | user to user trade | 50 |
| Why I think strong general AI is coming soon | over 1 year ago | user to user trade | 60 |
| Manifund Bank | over 1 year ago | deposit | +10 |
| Manifund Bank | over 1 year ago | deposit | +100 |