Nuño Sempere

@NunoSempere

regrantor

Researcher & forecaster

https://nunosempere.com
$25,000 total balance
$25,000 charity balance
$0 cash balance

$0 in pending offers

About Me

I don't yet know what I will do with this money. Some threads that I am considering:

  • Grants whose appeal other funding sources can't understand.

  • Thiel-style funding: Grants to formidable people outside the EA community for doing things that they are intrinsically motivated to do and which might have a large positive effect on the world.

  • Targeted grants in the forecasting sphere, particularly around more experimentation.

  • Giving a large chunk of it to Riesgos Catastróficos Globales (https://riesgoscatastroficosglobales.com/) in particular.

  • Bets of the form "I am quite skeptical that you would do [some difficult thing], but if you do, happy for you to take my money, and otherwise I will take yours".

  • Bounties: like the above, but less adversarial, because you do get the amount if you succeed, but don't lose anything if you don't.

  • Non-AI longtermism.

  • Grants to the Spanish and German-speaking communities.

I am also considering doing things a bit differently from what the current EA ecosystem does, just for the information value. For example:

  • Giving in-depth feedback on applications that people pitch to me

    • The rationale is that this feedback could improve people's later career paths. I think that other funding orgs don't do this because they get overwhelmed with applications. But I'm not overwhelmed at the moment!

  • Putting bounties on people referring applications

  • Using Manifold prediction markets on the success of grants as a factor in evaluation (see the sketch after this list)

    • Requires grantees willing to be more transparent, though
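As an illustration of that last idea, here is a minimal sketch, assuming Manifold's public v0 API; the market slug is hypothetical, and "factor in evaluation" here just means reading off the market-implied probability of a grant's success:

```python
# Minimal sketch: read the market-implied probability of a grant's success
# from Manifold's public v0 API. The market slug below is hypothetical.
import json
import urllib.request

API = "https://api.manifold.markets/v0"

def market_probability(slug: str) -> float:
    """Return the current probability of the Manifold market with this slug."""
    with urllib.request.urlopen(f"{API}/slug/{slug}") as response:
        market = json.load(response)
    return market["probability"]

# Hypothetical market: "Will grant X hit its stated milestones by 2026?"
p = market_probability("will-grant-x-hit-its-milestones")
print(f"Market-implied probability of success: {p:.0%}")
```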

That said, I'm generally seeking to do "the optimal thing", so if I get some opportunity that I think is excellent I'll take it, even if it doesn't fall into the above buckets.

Also, I guess that $50k is not that large an amount, so I'm either going to have to be fairly strategic, or get more money :)

As for myself, I'm perhaps best known for starting Samotsvety Forecasting (samotsvety.org/), an excellent forecasting team; for being a prolific EA Forum poster (https://forum.effectivealtruism.org/users/nunosempere?sortedBy=top), though I have since moved to nunosempere.com/blog; or for my work at the Quantified Uncertainty Research Institute on topics of estimation and evaluation.


Comments


Nuño Sempere

about 4 hours ago

@vandemonian Are you currently funding-constrained? Do you have the capacity to put in more effort if you get more funding?


Nuño Sempere

about 1 month ago

Funding this. I like the lumenator part, but I particularly like the more ambitious life trajectory point.

On your application, you mention:

"returning the money left if I decided that this was not a good idea anymore"

Please consider not doing this; instead, either pivot to a better opportunity or keep the money until a good one arises.


Nuño Sempere

about 1 month ago

Overall I don't really understand the biosecurity ecosystem or how this would fit in, so I think I'm probably a bad funder here. Still, some questions:

  • Do you already have some decision-makers who could use these estimates to make different decisions?

  • How valuable do you think that this project is without the long covid estimate?

  • Who is actually doing this work? Vivian and Richard, or Joel and Aron?

  • Why are you doing this $3.6k at a time, rather than setting up some larger project with existing biosecurity grantmakers?


Nuño Sempere

about 2 months ago

Could you say a bit more about why this beats your counterfactual?


Nuño Sempere

about 2 months ago

I have too many conflicts of interest to fund this myself, but here are some thoughts:

I like thinking of Nathan's work in terms of a running theme: helping communities arrive at better beliefs, collectively, and figuring out how to make that happen.

On the value of that line of work:

- I have a pretty strong aversion to doing that work myself. I think that it's difficult to do and requires a bunch of finesse and patience that I lack.

- I buy that it's potentially very valuable. Otherwise, you end up with a Cassandra situation, where those who have the best models can't communicate them to others. Or you get top-down decisions, where a small group arrives at an opinion and transmits it from on high. Or you get various more complex problems, where different people in a community have different perspectives on a topic, and these don't get integrated well.

- I think a bottleneck at my previous job, at the Quantified Uncertainty Research Institute, was that we didn't take this social dimension into account and put too much emphasis on technical aspects.

One thing Nathan didn't mention is that estimaker, viewpoints, and his podcast can feed into each other: e.g., he has interviewed a bunch of people and gotten them to make quantified models about AI using estimaker (Katja Grace: https://www.youtube.com/watch?v=Zum2QTaByeo&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=8, Rohit Krishnan: https://www.youtube.com/watch?v=cqCYMgEnP7E&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=10, Garett Jones: https://www.youtube.com/watch?v=FSM94rmJUAU&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=4, Aditya Prasad: https://www.youtube.com/watch?v=rwTb7VgSZKU&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=6). This plausibly seems like a better way forward than the MIRI conversations (https://www.lesswrong.com/s/n945eovrA3oDueqtq).

Generally, you could imagine an interesting loop: viewpoint elicitation surfaces disagreements => representatives of each faction make quantified models => some process explains the quantified models to the public => you do an adversarial collaboration on the quantified models, parametrizing unresolvable disagreements so that members of the public can input their own values but otherwise reuse the model.

I see reason to be excited about epistemic social technology like that, and about having someone like Nathan figure things out in this space.


Nuño Sempere

2 months ago

I think that RCG's object-level work is somewhat valuable, and also that they could greatly contribute to making the Spanish and Latin American EA community stronger. I think one could argue that this doesn't exceed some funding bar, but ultimately that argument doesn't go through for me.