Excessive Optimism About Far-Future Causes
In my recent post on cause selection, I constructed a model where I grouped all the charities REG has raised money for into categories and gave each category a weight based on how much good I thought it did. I put a weight of 1 on my favorite object-level charity (MIRI) and gave other categories weights proportional to that. I put GiveWell-recommended charities at a weight of 0.1, which means I'm about indifferent between a donation of $1 to MIRI and $10 to the Against Malaria Foundation (AMF).
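To make the indifference claim concrete, here's a minimal sketch of how the weights translate into equivalent donation amounts. The weights are the ones from the post; the helper function and category labels are just for illustration.

```python
# Sketch of the weighting model. A weight of w on a category means a
# donation of $1 there is valued the same as $(w / w2) to a category
# with weight w2. Weights are from the post; the helper function and
# category labels are illustrative.
weights = {
    "MIRI": 1.0,
    "GiveWell top charities (e.g. AMF)": 0.1,
}

def equivalent_donation(dollars, source, target):
    """Dollars to `target` that do as much good as `dollars` to `source`."""
    return dollars * weights[source] / weights[target]

# $1 to MIRI is valued the same as $10 to AMF.
print(equivalent_donation(1, "MIRI", "GiveWell top charities (e.g. AMF)"))  # 10.0
```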
Buck criticized my model, claiming that my top charity, MIRI, is more than ten times better than AMF and that I'm being too conservative. But I believe this degree of conservatism is appropriate, and that a substantially larger ratio would be epistemically immodest.
High Uncertainty
I wouldn’t be too surprised if MIRI turned out to have a positive impact 1000 times greater than AMF’s. But I also wouldn’t be surprised if AMF turned out to be 1000 times better, or if some other charity, like Animal Charity Evaluators, were much better than either. My current best guess is that MIRI is the most effective of these three, and that if it is more effective, it’s probably more effective by a large margin. But the conclusion that MIRI is much better than any other charity requires a long chain of reasoning:
1. Effects on the far future are way more important than immediate impact.
2. Reducing existential risk is more important than values spreading.
3. Current organizations working on existential risk can actually have an effect.
4. AI is a sufficiently important existential risk that there aren’t any x-risk organizations in other sectors that represent better giving opportunities.
5. Among existential risk charities, MIRI has the best impact after accounting for room for more funding.
I believe that each of these is true, but it’s a long and complex chain of reasoning, and even modest uncertainty at each step compounds across the conjunction. I would not accept a bet at 3:1 odds that I’ll be just as bullish on MIRI a year from now as I am today.
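To see how quickly a conjunction erodes confidence, here's a toy calculation. The per-step credences below are made up for illustration, not my actual beliefs:

```python
# Toy illustration: even with fairly high confidence in each step, the
# conjunction of all five steps is much less certain. These per-step
# credences are made up for illustration only.
steps = {
    "far future dominates immediate impact": 0.9,
    "x-risk beats values spreading": 0.85,
    "current orgs can affect x-risk": 0.8,
    "AI x-risk beats other x-risk sectors": 0.8,
    "MIRI is best among x-risk orgs": 0.75,
}

p_all = 1.0
for step, credence in steps.items():
    p_all *= credence

print(f"P(every step holds) = {p_all:.2f}")  # ~0.37
```

Five steps at 75–90% each leave you well under even odds on the full conjunction.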
The fourth and fifth steps only concern how strong MIRI is relative to other existential risk organizations. I am somewhat more confident that MIRI is better than AMF than I am that MIRI is the best x-risk organization, but I’m not highly confident about either.
Weakness of Expected Value Estimates
People have a history of making absurdly over-optimistic expected value estimates, even when they try to make their estimates conservative. If it appears to me that MIRI is 1000 times better than AMF, I have good reason to be skeptical of my own estimate; the outside view says it’s probably too high. (This is something like Hofstadter’s law: your expected value estimate is too optimistic, even after you account for the fact that your expected value estimate is too optimistic.)
We have considerable uncertainty about the expected value of donations to AMF. But GiveWell has extensively investigated AMF, so our estimate of its cost-effectiveness is much more robust. AMF has a lot of evidence pulling the estimate away from the prior; MIRI has much less, so its estimate should stay closer to the prior, and I’m wary of being overconfident.
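One way to formalize this is the standard Bayesian adjustment for noisy cost-effectiveness estimates: put a normal prior on log cost-effectiveness and treat your estimate as a noisy measurement. The sketch below uses this textbook normal-normal update; every number in it is made up for illustration.

```python
def posterior_log_ce(prior_mean, prior_sd, estimate, estimate_sd):
    """Normal-normal Bayesian update on log cost-effectiveness.

    A noisier estimate (larger estimate_sd) gets shrunk harder toward
    the prior mean.
    """
    prior_prec = 1 / prior_sd ** 2
    est_prec = 1 / estimate_sd ** 2
    return (prior_prec * prior_mean + est_prec * estimate) / (prior_prec + est_prec)

# Prior: the typical charity, on a log10 cost-effectiveness scale.
prior_mean, prior_sd = 0.0, 1.0

# AMF-style estimate: a modest claim backed by lots of evidence (small
# estimate_sd), so the posterior stays close to the estimate.
print(posterior_log_ce(prior_mean, prior_sd, estimate=1.0, estimate_sd=0.3))  # ~0.92

# MIRI-style estimate: claims 1000x (3.0 on the log10 scale) but with
# huge uncertainty, so the posterior is pulled most of the way back.
print(posterior_log_ce(prior_mean, prior_sd, estimate=3.0, estimate_sd=2.0))  # 0.6
```

On these made-up numbers, the shrunken 1000x estimate actually lands below the well-evidenced modest one, which is the outside-view worry in miniature.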
Adjusting for Others’ Beliefs
Other smart people whom I respect, like Peter Hurford and Jeff Kaufman, are quite skeptical of speculative causes with weaker evidence. While I disagree with them to some extent, and I believe we can be sufficiently confident about some far-future charities, their positions should put limits on how confident I can be. Peter and Jeff are intelligent and rational, share my values, and agree with me about lots of non-obvious things; while they might be wrong, I can’t be too confident that they’re wrong.
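One simple way to model this kind of deference is a linear opinion pool: take a weighted average of my credence and my peers'. Every number below is an illustrative stand-in, not anyone's actual belief.

```python
# Linear opinion pool: treat my all-things-considered credence as a
# weighted average of my own credence and those of epistemic peers.
# All numbers here are illustrative stand-ins, not actual beliefs.
credences = {
    "me": 0.8,     # credence that MIRI beats AMF by more than 10x
    "Peter": 0.2,
    "Jeff": 0.25,
}
peer_weights = {"me": 0.5, "Peter": 0.25, "Jeff": 0.25}

pooled = sum(peer_weights[name] * credences[name] for name in credences)
print(f"pooled credence: {pooled:.2f}")  # ~0.51
```

Even giving my own view half the weight, peers' skepticism drags a confident 0.8 down toward a coin flip, which is roughly the limiting effect I have in mind.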