Summary: The most dangerous existential risks appear to be the ones that we only became aware of recently. As technology advances, new existential risks appear. If we extrapolate this trend, there may be even worse risks that we haven’t discovered yet.

Epistemic status: From the inside view, I find the core argument compelling, but it involves sufficiently complicated considerations that I’m not more than 50% confident in its correctness. In this essay, if I claim that something is true, what I really mean is that it’s true from the perspective of a particular argument, not necessarily that I believe it.

Cross-posted to the Effective Altruism Forum.

Unknown existential risks

Humanity has existed for hundreds of thousands of years, and civilization for about ten thousand. During this time, as far as we can tell, humanity faced a relatively low probability of existential catastrophe (far less than a 1% chance per century). But more recently, while technological growth has offered great benefits, it has also introduced a new class of risks to the future of civilization, such as the possibilities of nuclear war and catastrophic climate change. And some existential risks have been conceived of but aren’t yet possible, such as self-replicating nanotechnology and superintelligent AI.

In his book The Precipice, Toby Ord classifies existential risks into three categories: natural, anthropogenic, and future. He provides the following list of risks:

  • Natural Risks
    • Asteroids & Comets
    • Supervolcanic Eruptions
    • Stellar Explosions
  • Anthropogenic Risks
    • Nuclear Weapons
    • Climate Change
    • Environmental Damage
  • Future Risks
    • Pandemics
    • Unaligned Artificial Intelligence
    • Dystopian Scenarios

Additionally, he provides his subjective probability estimates that each type of event will occur and result in an existential catastrophe within the next century:

Existential catastrophe | Chance within next 100 years
Asteroid or comet impact | ∼ 1 in 1,000,000
Supervolcanic eruption | ∼ 1 in 10,000
Stellar explosion | ∼ 1 in 1,000,000,000
Nuclear war | ∼ 1 in 1,000
Climate change | ∼ 1 in 1,000
Other environmental damage | ∼ 1 in 1,000
“Naturally” arising pandemics | ∼ 1 in 10,000
Engineered pandemics | ∼ 1 in 30
Unaligned artificial intelligence | ∼ 1 in 10
Unforeseen anthropogenic risks | ∼ 1 in 30
Other anthropogenic risks | ∼ 1 in 50

Obviously, these estimates depend on complicated assumptions and people can reasonably disagree about the numbers, but I believe we can agree that they are at least qualitatively correct (e.g., asteroid/comet impacts pose relatively low existential risk, and engineered pandemics look relatively dangerous).

An interesting pattern emerges: the naturally-caused existential catastrophes have the lowest probability, anthropogenic causes appear riskier, and future causes look riskier still. We can also see that the more recently-discovered risks tend to pose a greater threat:

Imagine if the scientific establishment of 1930 had been asked to compile a list of the existential risks humanity would face over the following hundred years. They would have missed most of the risks covered in this book—especially the anthropogenic risks. [Footnote:] Nuclear weapons would not have made the list, as fission was only discovered in 1938. Nor would engineered pandemics, as genetic engineering was first demonstrated in the 1960s. The computer hadn’t yet been invented, and it wasn’t until the 1950s that the idea of artificial intelligence, and its associated risks, received serious discussion from scientists. The possibility of anthropogenic global warming can be traced back to 1896, but the hypothesis only began to receive support in the 1960s, and was only widely recognized as a risk in the 1980s.1

In other words:

  • Natural risks that have been present for all of civilization’s history do not pose much threat.
  • Risks that only emerged in the 20th century appear more likely.
  • The likeliest risks are those that cannot yet occur with present-day technology, but might become possible within the next century.

As technology improves, the probability of an existential catastrophe increases. If we extrapolate this trend, we can expect to discover even more dangerous risks enabled by as-yet-unknown future technologies. As Ord observes, the scientific community of a century ago had not conceived of most of the risks that we would now consider the most significant. Perhaps in 100 years’ time, technological advances will enable much more significant risks that we cannot think of today.

Or perhaps there exist existential risks that are possible today, but that we haven’t yet considered. We developed nuclear weapons in 1945, but it was not until almost 40 years later that we realized their use could lead to a nuclear winter.2 We might already have the power to cause an existential catastrophe via some mechanism not on Toby Ord’s list; and that mechanism might be easier to trigger, or more likely to occur, than any of the ones we know about.

If we accept this line of reasoning, then looking only at known risks might lead us to substantially underestimate the probability of an existential catastrophe.

Even more worryingly, existential risk might continue increasing indefinitely until an existential catastrophe occurs. If technological growth enables greater risk, and technology continues improving, existential risk will continue increasing as well.3 Improved technology can also help us reduce risk, and we can hope that the development of beneficial technologies will outpace that of harmful ones. But a naive extrapolation from history does not present an optimistic outlook.
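To make the extrapolation concrete, here is a minimal sketch of how cumulative risk behaves if the per-century risk keeps growing (see also note 3 on hyperbolic technological growth). The starting risk of 1 in 6 and the doubling each century are assumptions chosen purely for illustration, not estimates taken from Ord or from this essay.

```python
# Illustrative only: cumulative probability of at least one existential
# catastrophe when the per-century risk grows over time. The initial risk
# (1 in 6) and the doubling rate are assumptions for this sketch, not
# estimates taken from Ord or from this essay.

def cumulative_catastrophe_probability(initial_risk, growth_factor, centuries):
    """Chance of at least one catastrophe over `centuries` centuries,
    assuming the per-century risk multiplies by `growth_factor` each
    century (capped at 1)."""
    survival = 1.0
    risk = initial_risk
    for _ in range(centuries):
        survival *= 1.0 - min(risk, 1.0)
        risk *= growth_factor
    return 1.0 - survival

for n in (1, 3, 5, 10):
    p = cumulative_catastrophe_probability(1 / 6, 2.0, n)
    print(f"{n:>2} centuries: ~{p:.0%} cumulative chance of catastrophe")
```

Under these assumed numbers the cumulative probability passes 80% within three centuries, which is the sense in which a naive extrapolation does not present an optimistic outlook.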

Types of unknown risk

We can make a distinction between two types of unknown risk:

  1. Currently-possible risks that we haven’t thought of
  2. Not-yet-possible risks that will become possible with future technology

The existence of the first type of risk leads us to conclude that we face a higher probability of imminent existential catastrophe than we might otherwise think. The second type doesn’t affect our beliefs about existential risk in the near term, but it does suggest that we should be more concerned about x-risks over the next century or longer.

We shouldn’t necessarily respond to these two types of unknown risks in the same way. For example: To deal with currently-possible unknown risks, we could spend more effort thinking about possible sources of risk, but this strategy probably wouldn’t help us predict x-risks that depend on future technology.

Why unknown risks might not matter so much

In this section, I will present a few arguments (in no particular order) that unknown risks don’t matter as much as the preceding reasoning might suggest. Of these, the only one I find compelling is the “AI matters most” argument, although it involves sufficiently complex considerations that I do not feel confident about it.

Argument 1: AI matters most

We have some reason to expect superintelligent AI in particular to pose a greater risk than any unknown future technology. If we do develop superintelligent AI, humans will no longer be the most intelligent creatures on the planet. Intelligence has been the driving factor allowing humans to achieve dominance over the world’s resources, so we can reasonably expect that a sufficiently intelligent AI would be able to gain control over humanity. If we no longer control our destiny, then on priors, we should not expect a particularly high probability that we realize a positive future.4

Arguably, unknown risks cannot pose the same level of threat because they will not change who controls the future.

(On the other hand, there could conceivably be some unknown consideration even more important than who controls the future; if we thought of it, we would realize that it matters more than superintelligent AI.)

A sufficiently powerful friendly AI might be able to eliminate all anthropogenic existential risks and perhaps even some natural ones, reducing the probability of existential catastrophe to near zero and thus rendering unknown risks irrelevant. On the other hand, perhaps future technology will introduce existential risks that not even a superintelligent AI can foresee or mitigate.

Argument 2: There are no unknown risks

Perhaps one could argue that we have already discovered the most important risks, and that our uncertainty only lies in how exactly those risks could lead to existential catastrophe. (For instance, severe climate change could result in unexpected outcomes that hurt civilization much more than we anticipated.) On the outside view, I tend to disagree with this argument, based on the fact that we have continued to discover new existential risks throughout the past century. But maybe upon further investigation, this argument would seem more compelling. Perhaps one could create a comprehensive taxonomy of plausible existential risks and show that the known risks fully cover the taxonomy.5

Argument 3: New risks only look riskier due to bad priors

In general, the more recently we discovered a particular existential risk, the more probable it appears. I suggested above that this pattern occurs because technological growth introduces increasingly significant risks. But there is an alternative explanation: perhaps all existential risks are unlikely, and the recently discovered ones only appear more likely because of bad priors combined with wide error bars on our probability estimates.

Toby Ord alludes to this argument in his 80,000 Hours interview:

Perhaps you were thinking about asteroids and how big they were and how violent the impacts would be and think, “Well, that seems a very plausible way we could go extinct”. But then, once you update on the fact that we’ve lasted 2000 centuries without being hit by an asteroid or anything like that, that lowers the probability of those, whereas we don’t have a similar way of lowering the probabilities from other things, from just what they might appear to be at at first glance.

If we knew much less about the history of asteroid impacts, we might assign a much higher probability to an existential catastrophe due to asteroid impact. More generally, it’s possible that we systematically assign far too high a prior probability to existential risks that aren’t well understood, and then revise these probabilities downward as we learn more.
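To make the updating argument concrete, here is a minimal Bayesian sketch (my own illustration, not a calculation from Ord): under a uniform prior over the per-century catastrophe probability, a long catastrophe-free track record pushes the estimate far below where a brand-new, poorly understood risk starts out. It sets aside the anthropic complication discussed in note 6.

```python
# Minimal illustration of the "track record" update, assuming a uniform
# Beta(1, 1) prior over the per-century catastrophe probability and
# ignoring anthropic selection effects (see note 6).

from fractions import Fraction

def posterior_mean_risk(catastrophe_free_centuries):
    """Posterior mean per-century catastrophe probability after observing
    the given number of catastrophe-free centuries (Laplace's rule of
    succession: the mean of Beta(1, n + 1) is 1 / (n + 2))."""
    return Fraction(1, catastrophe_free_centuries + 2)

print(posterior_mean_risk(0))     # brand-new risk, no track record: 1/2
print(posterior_mean_risk(2000))  # ~2,000 catastrophe-free centuries: 1/2002
```

The point is not the specific numbers but the asymmetry Ord describes: risks with long histories get revised downward by the evidence, while newly discovered risks keep whatever (possibly inflated) prior we assign them.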

If this reasoning is correct, then not only should we not worry about unknown risks; we probably shouldn’t worry much about known risks either, because we systematically overestimate their likelihood.

Will MacAskill says something similar in “Are we living at the most influential time in history?”, where he argues that we should assume a low base rate of existential catastrophe, and that we don’t have good enough reason to expect the present to look substantially different from the rest of history.

I don’t find this argument particularly compelling. Civilization today has unprecedented technology and globalization, so I do not believe most of human history serves as a useful reference class. Additionally, we have good reason to believe that certain types of existential catastrophe have a reasonably high probability of occurring. For example, we know that the United States and the Soviet Union came close to starting a nuclear war more than once.6 And while a nuclear war would not necessarily cause an existential catastrophe, it would make one dramatically more likely.

(Strictly speaking, MacAskill does not necessarily claim that the current rate of existential risk is low, only that we do not live in a particularly influential time. His argument is consistent with the claim that existential risk increases over time, and will continue to increase in the future.)

Argument 4: Existential risk will decrease in the future due to deliberate efforts

If the importance of x-risk becomes more widely recognized and civilization devotes much more effort to it in the future, this would probably reduce risk across all causes. If we expect this to happen within the next 50-100 years, that would suggest we currently live in the most dangerous period. In previous centuries, we faced lower existential risk; in future centuries, we will make greater efforts to reduce x-risk. And if we believe that unknown risks primarily come from future technologies, then by the time those technologies emerge, we will have stronger x-risk protection measures in place. (Toby Ord says in his 80,000 Hours interview that this is the main reason he didn’t assign a higher probability to unknown risks.)

This argument seems reasonable, but it’s not necessarily relevant to cause prioritization. If we expect that deliberate efforts to reduce x-risk will likely come into play before new and currently-unknown x-risks emerge, that doesn’t mean we should deprioritize unknown risks. Our actions today to prioritize unknown risks could be the very reason that such risks will not seriously threaten future civilization.

Do we underestimate unknown x-risks?

  1. Toby Ord estimates that unforeseen risks have a 1 in 30 chance of causing an existential catastrophe in the next century. He gives the same probability to engineered pandemics, a higher probability to AI, and a lower probability to everything else.
  2. Pamlin & Armstrong (2015)7 (p. 166) estimate a 0.1% chance of existential catastrophe due to “unknown consequences” in the next 100 years. They give unknown consequences an order of magnitude higher probability than any other risk, with the possible exception of AI8.
  3. Rowe & Beard (2018)9 provide a survey of existential risk estimates, and they only find one source (Pamlin & Armstrong) that considers unknown risks (Ord’s book had not been published yet).

Based on these estimates, Pamlin & Armstrong appear to broadly agree with the argument in this essay that unknown risks pose a greater threat than all known risks except possibly AI (although I believe they substantially underestimate the absolute probability of existential catastrophe). Ord appears to agree with the weaker claim that unknown risks matter a lot, but not that they matter more than all known risks. But based on Rowe & Beard’s survey (as well as Michael Aird’s database of existential risk estimates), no other sources appear to have addressed the likelihood of unknown x-risks, which suggests that most researchers do not give unknown risks serious consideration. And there is almost no published research on the issue.

Implications

If unknown risks do pose a greater threat than any known risk, this might substantially alter how we should allocate resources on mitigating existential risk, although it’s not immediately clear what we should change. The most straightforward implication is that we should expend relatively more effort on improving general civilizational robustness, and less on mitigating particular known risks. But this might not matter much because (1) mitigating known risks appears more tractable and (2) the world already severely neglects known x-risks.

The Global Challenges Foundation has produced some content on unknown risks.7,10 Pamlin & Armstrong7 (p. 129) offer a few high-level ideas about how to mitigate unknown risks:

  1. Smart sensors and surveillance could detect many uncertain risks in the early stages, and allow researchers to grasp what is going on.

  2. Proper risk assessment in domains where uncertain risks are possible could cut down on the risk considerably.

  3. Global coordination would aid risk assessment and mitigation.

  4. Specific research into uncertain and unknown risks would increase our understanding of the risks involved.

On the subject of mitigating unknown risks, Toby Ord writes: “While we cannot directly work on them, they may still be lowered through our broader efforts to create a world that takes its future seriously” (p. 162).

In The Vulnerable World Hypothesis11, Nick Bostrom addresses the possibility that “[s]cientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization.” The paper includes some discussion of policy implications.

If we’re concerned about unknown future risks, we could put money into a long-term existential risk fund that invests it for use in future decades or centuries, deploying it only when the probability of existential catastrophe is deemed sufficiently high.

But note that a higher probability of existential catastrophe doesn’t necessarily mean we should expend more effort on reducing the probability. Yew-Kwang Ng (2016)12 shows that, under certain assumptions, a higher probability of existential catastrophe decreases our willingness to work on reducing it. Much more could be said about this;13 I only bring it up as a perspective worth considering.

In general, unknown risks look important and under-researched, but there are no clear prescriptions for how to mitigate them. More work is needed to better evaluate the probability of an existential catastrophe due to unknown risks, and to figure out what we can do about it.

Notes

  1. Ord, Toby. The Precipice (p. 162 and footnote 137 (p. 470)). Hachette Books. Kindle Edition. 

  2. Toby Ord on the precipice and humanity’s potential futures. 80,000 Hours Podcast. Relevant quote:

    [I]n the case of nuclear war, for example, nuclear winter hadn’t been thought of until 1982 and 83 and so that’s a case where we had nuclear weapons from 1945 and there was a lot of conversation about how they could cause the end of the world perhaps, but they hadn’t stumbled upon a mechanism that actually really was one that really could pose a threat. But I don’t think it was misguided to think that perhaps it could cause the end of humanity at those early times, even when they hadn’t stumbled across the correct mechanism yet.

  3. Technology appears to be growing hyperbolically. If existential risk is roughly proportional to technology, that means we could rapidly approach a 100% probability of existential catastrophe as technological growth accelerates. 

  4. I have substantial doubts as to whether humanity will achieve a positive future if left to our own devices, but that’s out of scope for this essay. 

  5. I tried to do this, and gave up after I came up with about a dozen categories of existential risk on the level of detail of “extinction via molecules that bind to the molecules in cellular machinery, rendering them useless” or “extinction via global loss of motivation to reproduce”. Clearly we can generate many risk categories at this level of detail, but it is far too coarse to actually explain the sources of risk or assess their probabilities. A proper taxonomy would require much greater detail, and would probably be intractably large. 

  6. I find it plausible that the Petrov incident had something like an 80% ex ante probability of leading to nuclear war. The anthropic principle means we can’t necessarily treat the non-occurrence of an extinction event as evidence that the probability is low. (I say “can’t necessarily” rather than “can’t” because it is not clear that the anthropic principle is correct.) 

  7. Pamlin, Dennis & Armstrong, Stuart. (2015). 12 Risks that threaten human civilisation: The case for a new risk category.

  8. Pamlin & Armstrong do not provide a single point of reference for all of their probability estimates, so I have produced one here for readers’ convenience. When providing a probability estimate, they give a point estimate, except for AI, where they provide a range, because “Artificial Intelligence is the global risk where least is known.” For three existential risks, they decline to provide an estimate.

    Extreme climate change 0.005%
    Nuclear war 0.005%
    Global pandemic 0.0001%
    Ecological catastrophe N/A
    Global system collapse N/A
    Asteroid impact 0.00013%
    Super-volcano 0.00003%
    Synthetic biology 0.01%
    Nanotechnology 0.01%
    AI 0-10%
    Unknown consequences 0.1%
    Bad global governance N/A

  9. Rowe, Thomas and Beard, Simon (2018) Probabilities, methodologies and the evidence base in existential risk assessments. Working paper, Centre for the Study of Existential Risk, Cambridge, UK. 

  10. Tzezana, Roey. Unknown risks. 

  11. Bostrom, N. (2019), The Vulnerable World Hypothesis. Glob Policy, 10: 455-476. doi:10.1111/1758-5899.12718 

  12. Ng, Y.‐K. (2016), The Importance of Global Extinction in Climate Change Policy. Glob Policy, 7: 315-322. doi:10.1111/1758-5899.12318 

  13. As one counter-point: In Ng’s model, a fixed investment reduces the probability of extinction by a fixed proportion (that is, it reduces the probability from P to aP for some constant a < 1). But I find it likely that we can reduce existential risk more rapidly than that when it’s higher.

    As a second counter-point, Ng’s model assumes that a one-time investment can permanently reduce existential risk, which also seems questionable to me (although not obviously wrong).