Cross-posted to the Effective Altruism Forum.
Summary: We should reduce existential risk in the long term, not merely over the next century. We might best do this by developing longtermist institutions[1] that will operate to keep existential risk persistently low.
Civilization could continue to exist for billions (or even trillions) of years. To achieve our full potential, we must avoid existential catastrophe not just this century, but in all centuries to come. Most work on x-risk focuses on near-term risks, and might not do much to help over long time horizons. Longtermist institutional reform could ensure civilization continues to prioritize x-risk reduction well into the future.
This argument depends on three key assumptions, which I will justify in this essay:
- The long-term probability of existential catastrophe matters more than the short-term probability.
- Most efforts to reduce x-risk will probably only have an effect on the short term.
- Longtermist institutional reform has a better chance of permanently reducing x-risk.
For the sake of keeping this essay short, I will gloss over a lot of complexity and potential caveats. Suffice it to say that this essay’s thesis depends on a lot of assumptions, and I’m not convinced that they’re all true. This essay is intended more as a conversation-starter than a rigorous analysis.
Long-term x-risk matters more than short-term risk
People often argue that we urgently need to prioritize reducing existential risk because we live in an unusually dangerous time. If existential risk decreases over time, one might intuitively expect that efforts to reduce x-risk will matter less later on. But in fact, the lower the risk of existential catastrophe, the more valuable it is to further reduce that risk.
Think of it like this: if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.
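The arithmetic above follows from a simple geometric model (my illustration of the reasoning, not something spelled out in the post): if the extinction probability per century is p, the expected survival time is 1/p centuries, so halving p doubles the expectation, and the absolute gain from each further halving grows as p falls.

```python
# Toy geometric model: with a constant per-century extinction
# probability p, the expected number of centuries civilization lasts
# (counting the century in which extinction occurs) is 1/p.
def expected_centuries(p: float) -> float:
    return 1.0 / p

print(expected_centuries(0.50))   # 2.0 centuries
print(expected_centuries(0.25))   # 4.0 centuries
print(expected_centuries(0.125))  # 8.0 centuries

# The absolute gain from halving the risk grows as risk falls:
for p in (0.5, 0.25, 0.125):
    gain = expected_centuries(p / 2) - expected_centuries(p)
    print(f"halving p={p} gains {gain:g} expected centuries")
```

Halving 50% risk gains 2 expected centuries; halving 12.5% risk gains 8.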
Perhaps we expect x-risk to substantially decline in future centuries. In that case, given the choice between reducing x-risk this century and reducing it in the future, we prefer to reduce it in the future.
This argument depends on certain specific claims about how x-risk reduction works. But the basic result (that we care more about x-risk in the long term than in the short term) holds up across a variety of assumptions. See Sittler (2018)[2] section 3 for a more rigorous justification and an explanation of the precise conditions under which this result holds.[3]
Current x-risk reduction efforts might only work in the short term
If we look at current efforts to reduce the probability of existential catastrophe, it seems like most of them will only have relatively short-term effects. For example, nuclear disarmament treaties probably reduce x-risk. But treaties don’t last forever. We should expect disarmament treaties to break down over time. Most efforts to reduce x-risk seem like this: they will reduce risk temporarily, but their effects will diminish over the next few decades. (AI safety might be an exception to this if a friendly AI can be expected to minimize all-cause existential risk.)
It seems likely to me that most x-risk reduction efforts will only work temporarily. That said, this belief is more intuitive than empirical, and I do not have a strong justification for it (and I’d only put maybe 60% confidence in this belief). Other people might reasonably disagree.
Longtermist institutional reform could permanently reduce x-risk
John & MacAskill’s recent paper Longtermist Institutional Reform proposes developing institutions with incentives that will ensure the welfare of future generations. The notion of long-term vs. short-term existential risk appears to provide a compelling argument for prioritizing longtermist institutional reform over x-risk reduction.
The specific institutional changes proposed by John & MacAskill might not necessarily help reduce long-term x-risk. But the general strategy of longtermist institutional reform looks promising. If we can develop stable and rational longtermist institutions, those institutions will put effort into reducing existential risk, and will continue doing so into the long-term future. This seems like one of the most compelling ways for us to reduce long-term x-risk. And as discussed in the previous sections, this probably matters more than reducing x-risk in the short term.
I have argued that we might want to prioritize longtermist institutional reform over short-term existential risk reduction. This result might not hold up if:
- In future centuries, civilization will reduce x-risk to such a low rate that further reductions become too difficult.
- Short-term x-risk reduction efforts can permanently reduce risk, more so than longtermist institutional reform would.
- Longtermist institutional reform is too intractable.
Maybe some other intervention would do a better job of mitigating long-term x-risk—for example, reducing risks from malevolent actors. Or we could work on improving decision-making in general. Or we might simply prefer to invest our money to be spent by future generations.
Subjective probability estimates
- The EA community generally underrates the significance of long-term x-risk reduction: 3 in 4
- Marginal work on (explicit) long-term x-risk reduction is more cost-effective than marginal work on short-term x-risk reduction: 1 in 3
- Longtermist institutional reform is the best way to explicitly reduce long-term x-risk: 1 in 3
Thanks to Sofia Fogel, Ozzie Gooen, and Kieran Greig for providing feedback on this essay.
[1] John, T. & MacAskill, W. (2020, forthcoming). Longtermist Institutional Reform. In Natalie Cargill (ed.), The Long View. London, UK: FIRST.
[2] Sittler, T. (2018). The expected value of the long-term future.
[3] Using Sittler’s model in section 3.1.1, “Diminishing returns on risk reduction”, under the assumptions that (1) one can only reduce x-risk for the current century and (2) efforts can reduce x-risk by an amount proportional to the current risk, it follows that x-risk reduction efforts are equally valuable regardless of the level of x-risk. Therefore, it’s better to reduce x-risk in future centuries than now, because you can invest your money at a positive rate of return and thus spend more on x-risk reduction in the future than you can now.
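This footnote's claim can be checked numerically with a toy version of the model (my own framing of the stated assumptions, not Sittler's exact formalism): with a constant per-century risk r and an intervention that cuts only the current century's risk by a fraction f, the gain in expected centuries equals f regardless of r.

```python
# Toy check of the footnote's claim. Assumptions (mine, following the
# footnote): constant per-century risk r, and an intervention that
# reduces only the current century's risk from r to (1 - f) * r.
def expected_lifetime(first_risk: float, later_risk: float) -> float:
    # Mean centuries survived, counting the extinction century:
    # E = 1 + (1 - first_risk) * (1 / later_risk)
    return 1.0 + (1.0 - first_risk) / later_risk

def gain(r: float, f: float) -> float:
    return expected_lifetime((1 - f) * r, r) - expected_lifetime(r, r)

# The gain is f expected centuries at every risk level:
for r in (0.5, 0.1, 0.01):
    print(f"r={r}: gain = {gain(r, 0.5):.4f}")  # ~0.5 in each case
```

Because the gain does not depend on the current risk level, waiting and investing (to buy a larger reduction later) dominates acting now, which is the footnote's conclusion.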