"Disappointing Futures" Might Be As Important As Existential Risks
Confidence: Possible.
Summary
- Perhaps the most concerning risk to civilization is that we continue to exist for millennia and nothing particularly bad happens, but that we never come close to achieving our potential—that is, we end up in a “disappointing future.” [More]
- A disappointing future might occur if, for example: we never leave the solar system; wild animal suffering continues; or we never saturate the universe with maximally flourishing beings. [More]
- In comparison to civilization’s potential, a disappointing future would be nearly as bad as an existential catastrophe (and possibly worse).
- We can make several plausible arguments for why disappointing futures might occur. [More]
- According to a survey of quantitative predictions, disappointing futures appear roughly as likely as existential catastrophes. [More]
- Preventing disappointing futures seems less tractable than reducing existential risk, but there are some things we might be able to do. [More]
Cross-posted to the Effective Altruism Forum.
Contents
- Summary
- Contents
- Introduction
- Examples of disappointing futures
- Why we should be concerned about disappointing futures
- Comparing disappointing futures to x-risks
- What might reduce the risk of disappointing futures?
- My subjective probability estimates
- Acknowledgments
- Notes
Introduction
As defined by Toby Ord in The Precipice, “An existential catastrophe is the destruction of humanity’s longterm potential.” Relatedly, a disappointing future is when humans do not go extinct and civilization does not collapse or fall into a dystopia, but civilization1 nonetheless never realizes its potential.
The most salient (although perhaps not the most probable) example of a disappointing future: civilization continues to exist in essentially the same form that it has for the past few hundred years or so. If we extrapolate from civilization’s current trajectory, we might expect the long-run future to have these features:
- The human population size stabilizes at around 10 billion.
- Global poverty ceases to exist, and all humans become wealthy by today’s standards.
- Scientific and societal advances make people somewhat happier, but not transformatively so.
- Humans continue not to care about wild animals’ welfare. Wild animal suffering continues to massively dominate human happiness, such that sentient life as a whole experiences more suffering than happiness.
- We never populate other planets.
Call this the “naively extrapolated future”. We could certainly extrapolate other plausible futures from civilization’s current trajectory—for example, if we extrapolate the expanding circle of moral concern, we might predict that future humans will care much more about animals’ welfare. And if humans become sufficiently technologically powerful, we might decide to end wild animal suffering. I don’t actually believe the naively extrapolated future is the most plausible outcome—more on that later—but I do think if you asked most people what they expect the world to look like a thousand years from now, they’d predict something like it.
A note on definitions: Under the definitions of “existential catastrophe” and “disappointing future”, it’s debatable whether a disappointing future counts as a type of existential catastrophe. But when people talk about interventions to reduce existential risk, they almost always focus on near-term extinction events or global catastrophes. If civilization putters around on earth for a few billion years and then goes extinct when the sun expands, that would qualify as a disappointing future. It might technically count as an existential catastrophe, but it’s not what people usually mean by the term. If we want to reduce the risk of a disappointing future, we might want to focus on things other than typical x-risk interventions. Therefore, it’s useful to treat disappointing futures as distinct from existential catastrophes.
In this essay, I argue that disappointing futures appear comparably important to existential catastrophes. I do not attempt to argue that effective altruists should prioritize disappointing futures. In terms of the scale/neglectedness/tractability framework, this essay focuses primarily on scale (although I will touch on tractability).
The basic argument:
- The importance of a particular future equals its goodness or badness, weighted by its probability of occurring.2 (See the formalization just after this list.)
- We might prefer a disappointing future over one where an existential catastrophe occurs, but both massively under-shoot civilization’s potential. Disappointing futures and existential catastrophes look comparably bad in comparison to the best possible outcomes.
- Disappointing futures are approximately as probable as existential catastrophes (to within an order of magnitude).
- Therefore, disappointing futures are about as important as existential catastrophes.
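To make the structure of this argument explicit, here is one way to formalize the first premise (the notation is mine, not taken from the sources cited). Writing $V_{\text{best}}$ for the value of the best future civilization could achieve and $V(F)$ for the value of a class of futures $F$, the importance of avoiding $F$ is roughly

$$\text{Importance}(F) \approx P(F) \times \bigl(V_{\text{best}} - V(F)\bigr).$$

The second and third points say that both factors are comparable for disappointing futures and for existential catastrophes, so their importance comes out comparable as well.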
The rest of this essay is structured as follows:
- In “Examples of disappointing futures”, I describe some possible future outcomes, and why they qualify as disappointing.
- In “Why we should be concerned about disappointing futures”, I provide some qualitative arguments for why we might end up in a disappointing future.
- In “Comparing disappointing futures to x-risks”, I compare the probability of a disappointing future to the probability of existential catastrophe, as well as their relative magnitudes.
- In “What might reduce the risk of disappointing futures?”, I briefly examine what we might be able to do to avoid disappointing futures, and how tractable this seems compared to reducing x-risks.
- I conclude with some of my subjective probability estimates on relevant predictions.
The first few parts of this essay can be read somewhat independently. Readers who feel they already understand the concept of disappointing futures can skip past “Examples of disappointing futures”, and start reading at “Why we should be concerned about disappointing futures”. Readers who agree that disappointing futures matter, or who just want to see quantitative predictions, can skip forward to “Comparing disappointing futures to x-risks”.
Examples of disappointing futures
My description of the naively extrapolated future paints a fairly specific picture. In this section, I will describe some disappointing futures that don’t require making as many predictions about how the future will look.
For each of these possible futures, the claim that it would be bad depends on adopting at least one controversial philosophical premise. I will not attempt to fully justify these premises—entire dissertations could be (and indeed have been) written about them. But you only need to accept the premises of one of these possible futures, or of some other future I didn’t list, for my argument to work. Under each example, I will briefly describe the required premises.
This is not meant to be a comprehensive list; it just includes some of the most easily-imaginable disappointing futures.
We never leave the solar system
If we spread throughout the galaxy, we could create many orders of magnitude more happy lives—perhaps on the order of a trillion times more than exist today. If we fail to do this, we reduce the value of the future by a factor of a trillion.
For the sake of illustration, suppose humanity has a 1 in 3 chance of extinction, a 1 in 3 chance of surviving but staying on earth, and a 1 in 3 chance of populating the galaxy. If we have the opportunity to snap our fingers and reduce the chance of one of the two bad outcomes to zero, then we prefer to eliminate the “stay on earth” outcome. And not just slightly prefer it—preventing this outcome is a trillion times better than preventing extinction. If we have the choice between reducing extinction risk to 0 with certainty, or reducing the chance of staying on earth by only 1 in a billion (from 33.3333333% to 33.3333332%), then we should still strongly prefer the latter.3
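Here is a minimal sketch of that comparison, using the illustrative numbers above (extinction valued at 0, staying on earth at 1, populating the galaxy at a trillion) and footnote 3's assumption that probability removed from one outcome shifts to the next-best outcome:

```python
# Toy expected-value comparison for the three-outcome illustration above.
# Values are on an arbitrary scale: extinction = 0, staying on earth = 1,
# populating the galaxy = 1e12 (a trillion times better, per the text).
V_EXTINCTION, V_EARTH, V_GALAXY = 0.0, 1.0, 1e12

def expected_value(p_extinction, p_earth, p_galaxy):
    return p_extinction * V_EXTINCTION + p_earth * V_EARTH + p_galaxy * V_GALAXY

baseline = expected_value(1/3, 1/3, 1/3)

# Option A: eliminate extinction risk; that probability mass becomes
# "stay on earth" (per footnote 3).
option_a = expected_value(0, 2/3, 1/3)

# Option B: shift one-in-a-billion probability from "stay on earth" to
# "populate the galaxy".
eps = 1e-9
option_b = expected_value(1/3, 1/3 - eps, 1/3 + eps)

print(option_a - baseline)  # ~0.33 -- gain from eliminating extinction entirely
print(option_b - baseline)  # ~1000 -- gain from the tiny shift toward the galaxy
```

Under these toy assumptions, the one-in-a-billion shift is worth roughly 3,000 times as much as eliminating extinction risk outright.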
Why might humans never spread throughout the universe?
We have already spread across almost all (land) regions of the earth. Given sufficient time, humans tend to populate habitable areas. However:
- Humans spreading across the planet coincided with increasing population, but human population appears to be stabilizing, which reduces the value of new land.
- Interstellar travel involves significant hurdles that we have never faced before. Other planets are uninhabitable, and we would have to invest massive efforts into terraforming them. Before colonizing other planets, one would expect humans to colonize Antarctica or the bottom of the ocean (or Nebraska for that matter), and people have not shown much interest in those.
Tyler Cowen spoke on the 80,000 Hours Podcast about why he expects humanity not to populate other planets:4
I think space is overrated.
It’s far, there are severe physical strains you’re subject to while you’re being transported, communication back and forth takes a very long time under plausible scenarios limited by the speed of light. And what’s really out there? Maybe there are exoplanets, but when you have to construct atmosphere, there’s a risk diversification argument for doing it.
But simply being under the ocean or high up in the sky or distant corners of the earth, we’re not about to run out of space or anything close to it. So I don’t really see what’s the economic reason to have something completely external, say, to the solar system.
The idea that somehow we’re going to be sitting here three million years from now, and I’ll have my galaxy and you’ll have yours, and we’re not even human anymore. It’s not even recognizable as something from science fiction. I would bet against that if we could arrange bets on it.
I would take the other side of that bet, conditional on civilization surviving long enough. But Cowen’s prediction is not unreasonable.
Required philosophical premises
The badness of this scenario depends on something like the total view of population ethics with linear aggregation:
- It is good to create happy people, and bad not to.
- All else equal, each additional happy person matters as much as the last one.
Other views might claim that something can only be good if it is good for someone (the person-affecting view) or that additional people have diminishing marginal value. (For a review of the most important views in population ethics, see Greaves (2017), Population Axiology.)
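To state the difference symbolically (a sketch; the notation is mine rather than Greaves's): if a world contains individuals with lifetime welfare levels $w_1, \ldots, w_n$, the total view with linear aggregation values the world as

$$W = \sum_{i=1}^{n} w_i,$$

so each additional person with positive welfare adds the same amount of value no matter how many people already exist. A view on which additional people have diminishing marginal value applies a concave transformation instead, such as the square-root example mentioned in footnote 5, $W = \sqrt{\sum_i w_i}$, while a person-affecting view denies that creating a new happy person is, by itself, good at all.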
Under these other views, it is okay not to create astronomical numbers of people in the far future, so it does not much matter if civilization never leaves earth.
However, to believe that existential risk matters overwhelmingly more than short-term causes, you need to adopt (something like) the total view with linear aggregation. Under non-total views, an existential catastrophe is still bad. But most justifications for prioritizing existential risk reduction focus on the overwhelming importance of the far future, which presupposes something like the total view with linear aggregation.5 For more, see Beckstead (2013), On the Overwhelming Importance of Shaping the Far Future, sections 4 and 5.
If we reject either of these premises, we must also reject the overwhelming importance of shaping the far future.
Massive nonhuman animal suffering continues
Even if humans colonize the galaxy, this could be bad if we bring wild animals along with us. There are many more wild animals than humans, especially small animals like insects and nematodes. Depending on assumptions about how consciousness works and on how happiness and suffering trade off, it seems likely that wild animal suffering outweighs human happiness. A galaxy filled with humans and wild animals might not merely be disappointing—it could be worse than extinction.
Most people value nature, and many see a moral duty to preserve it. We might expect future humans to bring nature with them to other planets. But over the last century, human populations have increased much more rapidly than other animals’, with the exception of farmed animals; if this trend continues, the balance of welfare might tip toward positive. Even so, a future universe filled with animal suffering would contain much less net welfare than it should, thus qualifying as a disappointing future.6 (Although a net positive future with small wild animal populations still seems less disappointing than a future where we never leave earth at all.)
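To see how this kind of aggregation can come out negative, here is a toy calculation. Every number in it is a hypothetical placeholder rather than an empirical estimate: the population figures, average welfare levels, and implicit moral weights are all contested (the next subsection discusses the premises involved).

```python
# Toy aggregation of welfare across groups. Every figure here is a hypothetical
# placeholder chosen only to illustrate the arithmetic, not an empirical estimate.
# "avg_welfare" is average welfare per individual on an arbitrary scale, already
# multiplied by whatever moral weight you give the group; negative = net suffering.
groups = {
    #                   (population, avg_welfare)
    "humans":           (8e9,        1.0),
    "wild vertebrates": (1e13,      -0.01),
    "insects":          (1e18,      -0.0001),
}

total = sum(pop * welfare for pop, welfare in groups.values())
print(f"total welfare: {total:.2e}")  # strongly negative under these assumptions
```

With these made-up inputs, insects dominate the total purely through population size, even at a tiny per-individual weight. Different assumptions about weights or welfare levels can flip the sign; the point is only that, on plausible inputs, wild animal suffering can dominate.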
Required philosophical premises
This scenario is only bad if you assign non-trivial moral value to non-human animals. For more on this, see Schukraft (2020), How to Measure Capacity for Welfare and Moral Status. How bad this scenario is depends on how bad you believe wild animals’ lives are.
Its badness does not depend on any controversial assumptions about population ethics, because while not all widely-held views agree that it is good to create happy beings, they do agree that it is bad to create beings who live miserable lives. In fact, under non-total views of population ethics, wild animal suffering could matter more than under the total view, because creating unhappy wild animals cannot be outweighed by creating happy humans.
We never saturate the universe with maximally flourishing beings
If we accept something like the total view of population ethics with linear aggregation, it follows that we should enrich the universe with as much goodness as possible. That means creating maximum pleasure, or eudaimonia, or whatever it is we consider valuable.
The current structure of the world uses almost all available energy on things that aren’t valuable in themselves, like heating and transportation. We could use that energy to make more happy people instead. And the beings capable of the greatest flourishing probably don’t look much like humans. By retaining physical human bodies, we would lose out on most of the possible good we could create. (David Pearce writes of the hedonistic imperative to create beings capable of much greater heights of flourishing than we can imagine.) And if we use our resources efficiently, we could create astronomically more flourishing beings than exist today.7
This vision does not depend on specific assumptions about what “flourishing” looks like. It could fit the hedonistic utilitarian idea of hedonium—matter that’s organized to efficiently produce simple but maximally happy beings, who have no function other than to experience happiness—but it could also look like something else. For almost any conception of what constitutes the good, surely we can find better ways of expressing it than with human bodies living in large, mostly-empty spaces.
But humans tend to like their bodies. Many people want to see the human species continue. Some people have proposed that we leave earth relatively untouched, while filling the rest of the universe with hedonium, achieving a good compromise across multiple value systems. If we don’t do this, and simply fill the universe with a human (or post-human) civilization that uses energy about as inefficiently as we use it today, this would represent a major missed opportunity.
Required philosophical premises
As with “We never leave the solar system”, the badness of this scenario presupposes the total view of population ethics with linear aggregation.
This scenario does not depend on any particular assumption about what constitutes happiness/flourishing. The only necessary assumption is that the best possible future is far better than a reasonably good one, even one where humans populate the accessible universe.
Why we should be concerned about disappointing futures
In the past, when I’ve talked to friends about the possibility of disappointing futures, especially friends who prioritize existential risk, they’ve tended to dismiss it as too unlikely. For example, they argue that wild animal suffering will not continue because technology will advance to the point that we have the power to end wild animal suffering. And even if most people don’t care about it, all we need is that a few people care enough to make it happen. Alternatively, I have heard people argue that civilization will stop consisting of biological beings (perhaps because humans upload their brains) and wild animals will cease to exist.
I believe some people are too confident in these arguments.
I find them plausible, at least. But I can also see plausible arguments that we shouldn’t worry about existential risks. The case for prioritizing x-risk reduction does not depend on a strong conviction that an existential catastrophe will occur otherwise. It only requires that an existential catastrophe appears to be a reasonable possibility (and might not even require that, if you’re willing to take bets with high apparent EV but tiny probabilities of success).
Similarly, the case for caring about disappointing futures does not require that they appear likely, only that they appear to be a reasonable possibility.
My best guess is that marginal work on x-risk reduction does more expected good than marginal work on disappointing futures. (More on this later.) But I do believe disappointing futures matter on roughly the same scale as x-risks (to within an order of magnitude), and have received almost no discussion. So they represent an under-investigated opportunity.
Below, I present a series of arguments for why we might expect the future to be disappointing. I find all of them plausible but not strongly compelling. I can also come up with many arguments to the contrary, on why disappointing futures might not occur. My intention is to show that we cannot easily dismiss the possibility, so I will only present arguments in favor.
Argument from the difficulty of mutually beneficial trades
People’s lives have generally gotten better over time, so it seems natural to predict that eventually we will solve all major problems and ensure everyone lives a fulfilling life. This prediction, while plausibly true, is certainly not obviously true.
So far, civilization has improved the lives of most participants because it has allowed people to make mutually beneficial trades. But it has utterly failed to improve the lives of non-participants, most notably non-existent potential humans and non-human animals. In the case of animals, civilization has in fact made their lives much worse, to an extent that perhaps outweighs all the good civilization has done for humans (at least so far).89 And while humans’ lives have tended to get better across generations, this has happened more or less accidentally due to economic and technological development—past generations did not focus on growing the economy so they could provide value to future generations. And this has only benefited existing people, not nonexistent potential people. While many people want to have children, almost nobody intrinsically cares about bringing new people into existence irrespective of kinship.
Life has gotten better (for humans) through two mechanisms:
- The economy allows people to gain from helping others (e.g., by developing new technology and selling it). Increasing globalization makes this process easier.
- Most people have a strong evolutionary desire to reproduce, so civilization continues across generations.
From a bird’s-eye view, people in society act in their own self-interest (hence the common economic concept of homo economicus). In order to avoid one of the disappointing futures described above, this must fundamentally change. To end wild animal suffering, or to create increasingly many happy people, or to create non-human sentient beings, actors must do things that provide no personal benefit. People do behave selflessly all the time, but (1) not frequently enough for it to be visible at the civilizational level, and (2) almost exclusively toward those with whom they have a personal connection. (For example, charitable donations clearly do help people; but improvements in standard of living over the past several hundred years have come almost entirely from people behaving self-interestedly in ways that had positive externalities, not from charity.)
In theory, we might achieve much larger human populations simply due to people’s evolutionary motivations to spread their genes as widely as possible. But empirically, this doesn’t seem to happen—people in wealthier societies tend to have fewer children.
If civilization will someday spend substantial resources on helping non-participants such as wild animals and non-existent future people, this seems to require a fundamental change in how people behave. Such a change could occur, but a hypothesis that posits a major deviation in human behavior should be viewed with suspicion.
The expanding circle
As Peter Singer and others have observed, over time, civilization has tended to widen its circle of moral concern. If we extrapolate the expanding circle, we might expect society’s concern to extend to all morally relevant beings. If so, society will probably aim to create the best possible world for all beings, averting a disappointing future. But we have some reason to doubt that this trend exists.
What is the probability that civilization will eventually come to incorporate the interests of all morally relevant beings? This is a complex sociological question that I am not qualified to answer. My weakly-held impression is that historical instances of moral circle expansion generally only occurred when the people in power could make beneficial trades with the previously-disenfranchised individuals or with those individuals’ kin. In any case, I do not believe we can confidently claim that future society will properly recognize the interests of all morally relevant beings.
More writings on this topic:
- Aaron Gertler, “The Narrowing Circle (Gwern)”
- Grue_Slinky, “The Moral Circle is not a Circle”
- Michael Aird, “Moral circles: Degrees, dimensions, visuals”
Why might civilization eventually benefit non-participants?
I have heard a few specific predictions on why we shouldn’t worry about moral circle expansion:
- We will develop an AI that runs something like Coherent Extrapolated Volition, and it will figure out that we should care about wild animals, future humans, and any other beings that might matter.
- A small number of altruists will single-handedly (few-handedly?) do what’s necessary to help disenfranchised groups.
- There won’t be any biological beings in the far future, so there will be no more wild animal suffering.
(The third prediction only matters for wild animal suffering, not for other causes of disappointing futures.)
I won’t address any of these in detail. But all of these predictions depend on fairly specific prerequisites. For example, #2 requires that these altruists have sufficient resources to substantially impact the world, and that their goals will not conflict with anyone else’s. If altruists decide to, say, eliminate predation, then they might face resistance from people who aesthetically value nature. If altruists want to build Dyson spheres to power tiny maximally-happy brains, that requires taking up lots of resources that other actors might want.10
Be wary of predictions that have specific prerequisites. Do not dismiss them entirely, but pay attention to their prerequisites.
Argument from the rarity of values realization
Even when people hold certain values, they typically fail to live up to those values except when it’s expedient. Some examples:
- In Peter Singer’s “drowning child” thought experiment, almost everyone agrees that we have a moral obligation to save the child, and that physical distance is not morally relevant. But almost nobody donates meaningful amounts of money to global poverty alleviation, even though doing so would not require any personal hardship for a large percentage of the population.
- A 2015 Gallup poll found that 32% believe animals should have the same rights as people, even though only 5% are vegetarian.11 (Surveys like this often suffer from social desirability bias and other issues. Nonetheless, 32% vs. 5% seems like such a large difference that it probably overwhelms any methodological flaws.)
- Famously, several of America’s founding fathers (most notably Thomas Jefferson) opposed slavery even though they owned slaves. Slavery did not end in the developed world until industrialization rendered the practice largely unprofitable.
In Utilitarianism (1863), John Stuart Mill made a similar observation:
All social inequalities which have ceased to be considered expedient, assume the character not of simple inexpediency, but of injustice…The entire history of social improvement has been a series of transitions, by which one custom or institution after another, from being a supposed primary necessity of social existence, has passed into the rank of universally stigmatised injustice and tyranny. So it has been with the distinctions of slaves and freemen, nobles and serfs, patricians and plebeians; and so it will be, and in part already is, with the aristocracies of colour, race, and sex.
Mill paints an optimistic picture of a historical trend of improvements in social equality. But this trend also suggests some reason for pessimism: if we expect a particular injustice to remain expedient, then it seems likely to continue indefinitely.
It seems highly likely that, given continuing technological and economic progress, global poverty will continue to shrink, and society will eventually replace factory farming with more humane methods of producing food. But what about the inequality between existent and non-existent beings? It seems likely that bringing happy beings into existence (outside of one’s own offspring or relatives) will never become expedient.12
Argument from the impossibility of reflective equilibrium
Reflective-equilibrium views aim to resolve ethical disagreements into a self-consistent whole that society can accept. One example is Eliezer Yudkowsky’s coherent extrapolated volition (CEV), which would involve finding convergent points of agreement among humans who know more, are more connected, are more the people they wish they were, and so on.
A main problem with this approach is that the outcome is not unique. It’s sensitive to initial inputs, perhaps heavily so. How do we decide what forms of “extrapolation” are legitimate? Every new experience changes your brain in some way or other. Which ones are allowed? Presumably reading moral-philosophy arguments is sanctioned, but giving yourself brain damage is not? How about taking psychedelic mushrooms? How about having your emotions swayed by poignant music or sweeping sermons? What if the order of presentation matters? After all, there might be primacy and recency effects. Would we try all possible permutations of the material so that your neural connections could form in different ways? What happens when – as seems to me almost inevitable – the results come out differently at the end?
There’s no unique way to idealize your moral views. There are tons of possible ways, and which one you choose is ultimately arbitrary. (Indeed, for any object-level moral view X, one possible idealization procedure is “update to view X”.) Of course, you might have meta-views over idealization procedures, and then you could find idealized idealized views (i.e., idealized views relative to idealized idealization procedures). But as you can see, we end up with infinite regress here.
If civilization undertakes a Long Reflection, it might fail to reach reflective equilibrium, or it might land on a bad set of terminal values.
A counter-argument: if we do not believe civilization will identify correct values after a Long Reflection, why should we expect our current values to be correct? Determining whether we should care about this argument essentially requires solving meta-ethics. It is extremely unclear to me how much credence I should put on the outcome of a Long Reflection if it disagrees with my current values. As a result, I consider this argument (the impossibility of reflective equilibrium) fairly weak, but hard to definitively dismiss.
Argument from technological hurdles
I alluded to this argument previously: humans might not leave earth because doing so requires overcoming significant technological hurdles without much obvious advantage.
In the past, when humans migrated to new areas, they were largely motivated by economic opportunity. Other planets do not appear to offer any such opportunities, or at least not as directly. And while migrating on earth is not easy, migrating to another planet is far more difficult.
Right now we don’t have the technology to populate other planets, and we can only loosely speculate as to how we might develop it in the future. This holds especially true for interstellar travel. We might someday become sufficiently economically and technologically prosperous that interstellar colonization becomes easy, but then again, we might not.
Interstellar travel, terraforming, and other exceptionally difficult problems might only be solvable by a large unified Manhattan Project-style effort. In the past, people have mustered these sorts of efforts during wartime, or in bids to achieve global dominance, or possibly due to other idiosyncratic motivations that might not repeat themselves in the future. Interstellar colonization might never happen without a major concerted push for it.
Another possibility: interstellar colonization might simply be impossible no matter how advanced our technology becomes. This seems implausible, but if true, it would mean we should not prioritize trying to make it happen, because we can’t do anything about it anyway.
For more, see Mike Smith on the difficulty of interstellar travel for humans.
Suppose the ideal future contains beings that experience much greater heights of well-being than humans do. Creating such beings might require substantial technological advances that won’t occur without concerted effort. If civilization broadly agrees that this is a desirable goal, then they can probably make it happen. But if we depend on a small number of altruists to create most of the value of the future (a possibility that Paul Christiano explores in “Why might the future be good?”), then they might not have sufficient resources to achieve this goal.
Argument from natural selection
People who use their resources for altruistic purposes might get out-competed by self-interested actors. Or evolution might otherwise select for undesirable properties that make it much less likely that we will achieve a maximally good future. This could happen either with ordinary biological humans or with post-humans.
Undesirable natural selection could lead to catastrophically bad futures, such as if beings who live happy lives get out-competed by those who experience substantial suffering. Or it could lead to merely disappointing outcomes, such as if selection pressures against altruism make people less willing to create new happy beings.
The details of how natural selection might operate in the future can get fairly technical. I will not discuss them in detail. Nick Bostrom’s The Future of Human Evolution provides a more detailed perspective.
Argument from status quo
Certain predictions that effective altruists commonly make—such as that we end predation, or that we upload all humans’ brains onto computers, or that a superintelligent AI gains control of the world—would sound very strange to most people (and I am wary of unusual predictions that lots of smart people believe13). These predictions require something more than naively extrapolating from the status quo.
We should assign significant probability to the hypothesis that the world of the distant future will look basically the same as the world today, at least in the ways that count.
In general, perhaps due to the base rate fallacy, people tend to underestimate the probability that things don’t change. Bryan Caplan has famously won 20 out of 20 bets, mostly by betting on the status quo (see this reddit thread for relevant discussion).
Perhaps we shouldn’t extrapolate from “things usually don’t change over timescales of several years” to “things don’t change over centuries.” Clearly things change dramatically over centuries. But we should use caution when making predictions about unprecedented events, such as humanity colonizing other planets, or creating non-biological beings. We should not assign too low a probability to the naively extrapolated future (as described in the introduction).
Furthermore, for most of human civilization, change has occurred extremely slowly. (This is even more true if you look at the history of life on earth.) The rapid change over the last few centuries represents a big deviation from this trend. We might reasonably predict a reversion to the status quo of stability across centuries.
Comparing disappointing futures to x-risks
We have some good reasons to expect civilization to achieve its potential. But as argued in the previous section, disappointing futures do not seem particularly unlikely. How likely are disappointing futures compared to existential catastrophes?
Most people who prioritize x-risk don’t believe an existential catastrophe is overwhelmingly likely to occur. They use probabilistic reasoning: if this happened, it would be extremely bad; it doesn’t seem overwhelmingly unlikely; therefore, it’s important to work on. (I’m leaving out some steps in the argument, but that’s the basic idea.) The same reasoning applies to disappointing futures.
The main argument I have heard against disappointing futures is that they probably won’t happen. But if that argument worked, it would also defeat x-risk as a top priority.
That said, one could give two other arguments against prioritizing disappointing futures that don’t apply to x-risk:
- A disappointing future is much less likely than an existential catastrophe.
- Disappointing futures are harder to prevent.
I will discuss the second argument in its own section. Let’s dive into the first argument by thinking about the probabilities of a disappointing future or existential catastrophe.
Probability of an existential catastrophe
Michael Aird’s database of existential risk estimates includes three estimates of the probability of existential catastrophe over timescales longer than a century:
- Leslie (1996)14: “The probability of the human race avoiding extinction for the next five centuries is encouragingly high, perhaps as high as 70 percent.”
- Bostrom (2002)15: “My subjective opinion is that setting [the] probability [that an existential disaster will do us in] lower than 25% would be misguided, and the best estimate may be considerably higher.”
- Ord (2020)16 (p. 169): “If forced to guess, I’d say there is something like a one in two chance that humanity avoids every existential catastrophe and eventually fulfills its potential: achieving something close to the best future open to us.”
A note on Ord’s estimate, because he provides the most detail: it is apparent from the context of this quote that he either believes disappointing futures have negligible probability or counts them as a form of existential catastrophe.17
These estimates put the total probability of existential catastrophe somewhere around 25-50%, and they all agree with each other to within a factor of two. But we only have three estimates, so we should not anchor too strongly to this range.
Arguably, we care about the probabilities of specific x-risks and specific disappointing futures more than we care about the aggregate probability, because we might be able to prevent specific futures (e.g., existential catastrophe due to AI) more effectively than we can prevent an entire class of futures (e.g., existential catastrophe in general). Predictions on specific x-risks over the long term appear even sparser than on x-risk in general. I only know of one directly relevant estimate: Grace et al. (2017)18 surveyed AI experts and found a median 5% probability that AI results in an “extremely bad (e.g human extinction)” outcome.
Probability of a disappointing future
I do not believe we can reasonably put the probability of a disappointing future at less than about 5%, which means the probabilities of disappointing future and existential catastrophe fall within an order of magnitude of each other. (I would put my subjective probability of a disappointing future at much higher than 5%, although I do not have high confidence in my estimate.) Even if x-risks matter more than disappointing futures, they do not matter overwhelmingly more.
What do other people believe about disappointing futures?
Brian Tomasik provides a list of probability estimates on big questions about the future. Pablo Stafforini gave his probabilities on the same statements. Three years ago, I gave my own probabilities in the comments under Stafforini’s post, although today I do not entirely agree with the answers I gave. I have reproduced some of the most relevant statements in the table below. Another individual created a Google Doc to allow more people to provide their own estimates, which a few people did; but nobody provided estimates on all of the statements below, so I did not include them.
| | Brian | Pablo | Michael |
| --- | --- | --- | --- |
| Human-inspired colonization of space will cause net suffering if it happens | 72% | 1% | 40% |
| Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) | 50% | 10% | 20% |
| Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing | 15% | 8% | 50% |
| Humans will go extinct within millions of years for some reason other than AGI | 5% | 10% | 75% |
Regarding their credibility: Brian Tomasik has written many excellent essays on relevant issues. Pablo Stafforini has a strong record on measurable predictions—as of this writing, he holds rank 16 on Metaculus out of hundreds (thousands?) of active users. I would consider them both far more credible on these predictions than most people.
To my knowledge, those are all the public predictions on disappointing futures that have ever been made. I wanted more predictions, so I conducted a survey on the Effective Altruism Polls Facebook group. The survey received 12 responses—not a large sample, but much better than nothing.
The survey asked the following questions:
- What is the % probability that humanity achieves its long-term potential?
- What is the % probability that human civilization continues to exist into the long-term future, does not fall into a dystopia, but never achieves its potential?
- What is the % probability that the world in the long-term future is better than the world today, but less than 1000 times better?
I asked a fourth, optional question:
- What does “humanity achieves its long-term potential” mean to you?
The responses to this fourth question suggested that most participants roughly share the views laid out in this essay as to how good humanity’s long-term potential can be.19
The second and third questions are meant to give two different interpretations of what a disappointing future means. The second question is less precise but more relevant, while the third question is less relevant but more precise.
This table summarizes the responses to the three main questions:
| | Q1 | Q2 | Q3 |
| --- | --- | --- | --- |
| mean | 19% | 39% | 41% |
| median | 8% | 40% | 40% |
| standard deviation | 23% | 35% | 39% |
| interquartile range | 23% | 62% | 71% |
Some key observations:
- Q1 and Q2 together imply a mean 42% probability of existential catastrophe (100% - 19% - 39% = 42%), which is close to the mean estimated probability of a disappointing future. (See the short calculation below.)
- Most people did not assign a particularly low probability to disappointing futures.
- People’s estimates vary widely, as shown by the large standard deviation and interquartile range. On every question, at least one person answered <= 0.1%, and at least one other answered >= 90%.
- Most people gave similar probabilities on their answers to Q2 and Q3. But some people’s estimates differed substantially between the two (in both directions). This suggests people interpreted the questions in varying ways.
The mean and median estimates, as well as the high variance in answers, indicate that we should be concerned about the impact of disappointing futures on the expected value of the future.
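As a check on the first observation above, here is the short calculation. It assumes respondents treated “achieves its potential,” “survives but never achieves its potential,” and “existential catastrophe or dystopia” as mutually exclusive and exhaustive outcomes, which the survey did not enforce:

```python
mean_q1 = 0.19  # mean P(humanity achieves its long-term potential)
mean_q2 = 0.39  # mean P(civilization survives, avoids dystopia, never achieves potential)

# Whatever probability mass is left over must fall on extinction, collapse,
# or dystopia -- i.e., on existential catastrophe broadly construed.
implied_catastrophe = 1 - mean_q1 - mean_q2
print(f"{implied_catastrophe:.0%}")  # 42%
```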
However, this survey has some important limitations:
- It only received a small number of responses.
- It samples from the Effective Altruism Polls Facebook group, and this sample could be biased in various ways. But I expect that this is a much better sample than one could get from something like Mechanical Turk, because the respondents are unusually likely to have thought about longtermism, existential risk, and humanity’s potential.
- The survey questions leave substantial room for interpretation, so we don’t know exactly what people meant by their responses.
What might reduce the risk of disappointing futures?
Suppose we agree that disappointing futures appear likely—possibly even more likely than existential catastrophes. What can we do about it?
- Some have proposed that we can increase the probability of positive outcomes via values spreading / moral circle expansion. People who argue for prioritizing values spreading have sometimes relied on the assumption that the long-term future is likely to have negative expected value. But this assumption is not necessary. We only require that the ideal future is much more desirable than the “average” future, and that values spreading can meaningfully increase the probability of achieving this ideal. (Jacy Reese discusses the tractability of this solution.)
- We could structure incentives such that people properly account for the interests of beings who have no direct representation. Longtermist Institutional Reform by MacAskill & John (2020) offers some proposals on how to encourage institutions to make decisions that benefit future citizens. These proposals are preliminary and have some substantial limitations. And ensuring that the interests of non-citizens (including animals, beings who would only exist if population increased, simulated minds, etc.) are represented seems even more difficult. But the basic concept does appear promising. See also Dullaghan (2019), Deliberation May Improve Decision-Making.
- Reducing global catastrophic risk might be the most effective way to decrease the probability of a disappointing future. A global catastrophe that destabilizes civilization without permanently destroying it might damage civilization’s propensity toward altruism. Nick Beckstead discusses this possibility in “The long-term significance of reducing global catastrophic risks”.
- If altruistic actors build the first superintelligent AI, this could increase the chances that the AI gives proper consideration to all morally significant beings.
- Some have argued that the best way to improve the long-term future is to increase economic growth (see Tyler Cowen in an interview with 80,000 Hours). This seems unlikely to help us avert a disappointing future—on the margin, economic growth probably cannot (e.g.) provide sufficient resources/motivation for civilization to colonize space if it wasn’t going to happen anyway. Marginal economic growth can probably only make things happen sooner, not switch them from not-happening to happening. But it’s at least conceivable that increasing economic growth could reduce the risk of a disappointing future.
- The best way to reduce the risk of disappointing futures might be to save money. Longtermist altruists can behave more patiently than other actors. If altruists can preserve their assets over a long enough time horizon (and if non-altruists prefer not to do this), then the altruists will slowly gain power. Eventually, they can use that power to bring about an ideal world. If this plan works, it successfully avoids concerns about the difficulty of mutually beneficial trades, because only altruists will inherit the savings of the previous generation’s altruists. But this requires ensuring that the money stays in altruistic hands.
These ideas all pose significant challenges. This list only represents a first look at some possible solutions.
Tractability
Existential risks and disappointing futures appear to have similar scale. The choice of which to prioritize depends primarily on tractability. Each of the possible solutions in the previous section has its own tractability issues. We cannot say much about them in general—we need to discuss each potential solution individually. So, while much could be said about each of them, I will leave such discussion out of this essay. In the previous section, I included references to deeper analyses of each solution, some of which address tractability.
I do want to raise one broad consideration on tractability. If future people do not achieve an astronomically good outcome, that means the future people who want to achieve this are unable to. If they can’t set civilization on a good trajectory, why should we expect to succeed?
This seems to me like a strong argument, and pushes in favor of investing to give later as the best solution. If we expect future people to have more power to avert a disappointing future, then we should give our resources to them.
But I have three counter-arguments.
- Civilization might have inertia: as it grows, efforts to change its trajectory become more difficult. In that case, earlier efforts can expect to have more success, and we should act now. (However, this only applies if inertia grows more quickly than the investment rate of return. Otherwise, future beings will still be able to do more with our invested money than we can today; see the sketch just after this list.20)
- We might live at a time of unusually high leverage. People often claim this with respect to x-risk: we face an unusually high risk of extinction, and if we can get through this critical period, we should expect to survive for millennia. Something similar could be true with respect to avoiding disappointing futures, perhaps if we expect values to “lock in” at some point, or for some other reason.
- Existential risk could increase over time or remain relatively high for a long time. And perhaps deliberate reductions in x-risk only persist for a few decades or centuries. So this argument against working on disappointing futures also makes working on x-risk look less valuable. Therefore, it does not make disappointing futures look less tractable relative to x-risk.
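As a rough formalization of the first counter-argument above (my own sketch, with invented symbols): suppose the cost of buying a fixed amount of trajectory change grows at the inertia rate $g$, while invested assets grow at the rate of return $r$. Then a budget invested for $t$ years and spent later buys

$$\left(\frac{1+r}{1+g}\right)^{t}$$

times as much trajectory change as spending it today, so acting now beats investing if and only if $g > r$. This is the condition stated parenthetically in the first point, and it parallels the discounting logic in footnote 20.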
My subjective probability estimates
Making long-term predictions is hard. I do not expect that I can make reliable predictions about the long-term future of civilization. Still, for the sake of transparency, I will provide my rough probability estimates for the main outcomes discussed in this essay.
A disappointing future is more likely than an existential catastrophe: 2 in 5
Marginal work now to reduce the probability of a disappointing future does more good than marginal work now on existential risk: 1 in 3
Working to determine whether we should prioritize disappointing futures has better marginal expected value than working directly on existential risk: 1 in 321
The effective altruism community under-appreciates the significance of disappointing futures: 2 in 3
On long-term outcomes:
- A “fast” existential catastrophe occurs: 1 in 2
- Civilization lasts a long time, but is net negative or only weakly net positive: 1 in 3
- Civilization does something kinda good, but not nearly as good as it could be: 1 in 6
- Civilization achieves its potential: 1 in 50
(These add up to more than 100% because I don’t want to give too much precision in my estimates.)
Acknowledgments
Thanks to David Moss and Jason Schukraft for providing feedback on drafts of this essay.
Notes
1. I prefer to speak about existential risks with respect to civilization rather than humanity, for three reasons:
- We should care about all sentient life, not just humans. Use of the word “humanity” seems to implicate that only humanity matters, in a way that “civilization” doesn’t. (I do not mean to suggest that people who use the word “humanity” only care about humans, because I know that is often not the case.)
- Civilization is the thing that has potential in the relevant sense, not the human species.
- Future civilization might not consist (solely) of humans.
2. I will not attempt to justify this premise, but it is generally accepted by longtermists. ↩
3. This assumes that decreasing the probability of one outcome increases the probability of the following outcome. So decreasing the probability of extinction increases the probability that we survive but stay on earth, without changing the probability that we populate the galaxy. ↩
4. Lightly edited to condense. ↩
5. Some other views might still endorse the overwhelming importance of the far future. For example, a form of utilitarianism where the total utility of the world is the square root of the sum of individuals’ utilities. But the total view with linear aggregation is the most notable such view. ↩
6. In a future where we successfully implement Dyson spheres of hedonium but still have wild animals for some reason, the animals’ suffering would be relatively minor compared to the hedonium, so I wouldn’t consider this a disappointing future. ↩
7. In Astronomical Waste, Nick Bostrom estimates that the reachable universe can concurrently support about 10^38 humans. If the earth can support up to 10^15 humans at a time (which seems like a liberal upper estimate), then failing to colonize the universe inhibits our potential by a factor of 10^23. ↩
8. This depends on philosophical issues around how to trade between humans’ and animals’ welfare, but that’s beyond the scope of this essay. ↩
9. At least, this is true if we look at the animals that directly interact with civilization, most of whom live in bad conditions on factory farms. Civilization might have reduced wild animal suffering on balance, but if so, only by accident. ↩
10. Carl Shulman argues that “[s]preading happiness to the stars seems little harder than just spreading.” But even if (say) 50% of resources are taken by Eudaimonians and the other 50% are taken by Locusts, then shifting Eudaimonian control to 50% + epsilon still has enormous value. ↩
11. Additionally, according to a survey by Sentience Institute (2017), 49% of US adults support a ban on factory farming and 33% support a ban on animal farming. This and the Gallup survey cover different populations and use different methodologies, so they aren’t directly comparable. ↩
12. Over long time horizons, natural selection might favor (post-)humans who try to have as many children as possible. This might be a mechanism by which long-term populations increase greatly, although it doesn’t necessarily mean those future populations will be close to as happy as possible, or that they will efficiently convert energy into well-being. ↩
13. The “that lots of smart people believe” clause doesn’t make the predictions less likely. The point is that lots of smart people believing something isn’t necessarily good evidence that it’s true, particularly with regard to predictions about complicated issues. ↩
14. Leslie, J. (2002). The End of the World: The Science and Ethics of Human Extinction. Routledge. ↩
15. Bostrom, N. (2002). Existential Risks. ↩
16. Ord, T. (2020). The Precipice. Hachette Books. Kindle Edition. ↩
17. In the context surrounding this quote, Ord writes that he believes the probability of existential catastrophe in the next century is 1/3 of the total probability. In chapter 6, he assigns a 1 in 6 chance to existential catastrophe this century, implying a 50% total probability of existential catastrophe. Combined with the one-in-two chance (quoted above) that humanity fulfills its potential, this implies he believes the only two possibilities are existential catastrophe and humanity realizing its potential (which could be true by definition, if you define an existential catastrophe as merely failing to realize humanity’s potential (as opposed to defining it as a specific event that permanently prevents humanity’s potential from being realized)).
Additionally, in chapter 6, he categorizes all existential risks and lists his probability estimates. Disappointing futures can only fit under the “Other anthropogenic risks” category, which he estimates at a 1 in 50 chance in the next century. Multiplying this by 3 gives a 6% total chance.
However, this arithmetic doesn’t really make sense, because a disappointing future can’t occur in the next century. A disappointing future is the non-occurrence of humanity realizing its potential, so it doesn’t happen at any particular time. Ord does not specify his long-run probability estimates of extinction from various causes, so as an upper bound, he might believe that 100% of the existential risk after the 21st century comes from disappointing futures. ↩
18. Grace, K., et al. (2017). When Will AI Exceed Human Performance? Evidence from AI Experts. ↩
19. For example, some of them mentioned space-faring civilization, or the possibility of creating beings capable of much greater well-being than humans. ↩
20. This is analogous to how, in the Ramsey equation, the social discount rate accounts for the rate of consumption growth. Future consumption is less valuable in proportion to how wealthy people are, much like how future efforts on trajectory change are less valuable in proportion to how much inertia civilization has.
Technically, we should discount future spending on trajectory changes at the inertia growth rate plus the philanthropic discount rate. ↩
21. I would give a higher probability, but I am mildly pessimistic regarding the ability of meta-level research to solve fundamental problems in cause prioritization. If I thought cause prioritization were more tractable, I would consider it by far the most important thing to work on, given how much uncertainty we currently have. (When I speak of cause prioritization, that includes upstream issues like population ethics and decision theory.) ↩