If humans become extinct, wild animal suffering will continue indefinitely on earth (unless all other animals go extinct as well, which is unlikely but possible). Wild animals’ lives are likely not worth living, so this would be bad, but it’s not the worst thing that could happen.

Preventing human extinction obviously means that humans will continue to exist, but this direct effect is trivial compared to the effects described below.

Major reasons why preventing human extinction might be bad:

  • We sustain or worsen wild animal suffering on earth.
  • We colonize other planets and fill them with wild animals whose lives are not worth living.
  • We create lots of computer simulations of extremely unhappy beings.
  • We eventually create an AI with evil values that creates lots of suffering on purpose. (But this seems highly unlikely.)

Major reasons why preventing human extinction might be good:

  • We colonize other planets and fill them with wild animals whose lives are worth living.
  • We successfully create a hedonium shockwave, i.e., we fill the universe with beings experiencing the maximum amount of pleasure that it is possible for beings to experience.
  • Even if we don’t create eudaimonia, we fix the problem of wild animal suffering and make most of the beings in the universe very happy.
  • We find other planets with wild animals whose lives are net negative and we make their lives better.

Emotional disclosure: I’m biased toward optimism here because I don’t like the idea of humans becoming extinct, and I definitely don’t like the idea that this could be the best outcome in expectation.

Even people who are pessimistic about wild animal suffering generally assume that preventing human extinction is good, but I do not often see this justified. Let’s consider some arguments for and against.

Arguments on Why Preventing Human Extinction May Be Good or Bad

Bad: Humans make wild animal suffering worse on earth

This scenario doesn’t require making any extreme predictions about the future, but it’s not clear that wild animal suffering (WAS) will be worse with humans than without. If humans go extinct then wild animals will almost certainly continue to suffer in the way they have been for hundreds of millions of years. It’s not conclusive that wild animals’ lives are net negative on balance, but it’s sufficiently likely that this is a major concern.

It’s conceivable that climate change could worsen wild animal suffering; this seems like the most plausible reason why humans might increase WAS if they stay on earth and don’t make any substantial technological advances. But it’s highly uncertain whether climate change will make things better or worse, and either way, humans will probably prevent climate change from going too far.

Bad: Humans colonize other planets and spread wild animal suffering

If humans continue to exist and successfully colonize other planets, there is some probability that they will fill the universe with wild animals (directed panspermia). This is good if wild animals’ lives are worth living on balance, and bad if they’re not worth living (the latter seems more likely).

Much of the reasoning below comes from a conversation I had with Kelsey Piper.

It is probably not technologically feasible to travel to other solar systems in less than thousands of years, and journeys that long would be much easier for beings not made of organic matter. If we’re not made of organic matter (e.g. we exist as emulations on computer systems), we have no need for a biosphere filled with wild animals, and we are unlikely to spread wild animal suffering.

On the other hand, if future humans preserve their current aesthetic values, they may want to create biospheres simply because they are aesthetically pleasing. This would use large amounts of resources that could instead be used for creating more humans. I’m uncertain whether future humans will have sufficiently robust population ethics to prefer using planetary-scale resources to create lots of happy humans (or human emulations, or whatever form humans take at that point) rather than wild animals. Right now, I do not expect that most humans consider a population of one trillion happy people better than a population of one billion happy people.

But Kelsey pointed out that it’s unreasonable to expect that humans would intentionally spend their limited resources on creating lots of wild animals with bad lives. Right now we are quickly burning through natural resources, and while there is a lot of pushback against this process, the dominant forces work in favor of habitat destruction. That will probably continue to be the case.

I expect it’s more likely than not that if we colonize other planets, we will not spread wild animal suffering, even if we don’t care about animals. However, WAS may be bad enough that even if we have only a relatively small probability of spreading it, the expected value of the far future is still negative.

Kelsey is extremely confident that far-future humans will not organize themselves as organic beings that live on planets orbiting stars. I don’t know how to make good predictions about this sort of thing but I believe she is good at reasoning and her confidence makes me update in her direction.

Good: In the far future, humans will care about all sentient beings

We have a few reasons to be optimistic that, if humans do fill other planets with non-human beings, we will make an effort to ensure that these beings are happy. Humans have become less violent and more cooperative over time (Steven Pinker has argued this in depth in The Better Angels of Our Nature; Holden discusses it here).

In The Expanding Circle, Peter Singer argues that humans’ circles of compassion have expanded over time. In light of this evidence, it appears plausible that concern for wild animal suffering will eventually become widespread as most people’s circle of compassion expands to include sentient animals in the wild. (Although some have disputed Singer’s evidence.)

If we do colonize other planets and we put organic non-human beings on those planets (which may not happen; see above), there’s a good chance that we will care about their welfare and make efforts to ensure that their lives are happy. I put a low prior on the claim that care for wild animals will eventually become widespread, but Singer’s expanding circle and the trends toward increasing cooperation that Pinker documents update me in the direction of optimism.

I would guess that future humans are more likely not to care about wild animals than to care. It certainly looks from the present like people’s circles of compassion are expanding and will eventually encompass wild animals, but predictions of this sort are extremely error-prone: they require assuming that moral sentiments will substantially change for a majority of the population. One could assert that humans today would care about wild animals if their beliefs were in reflective equilibrium; I don’t know if reflective equilibrium is even possible, but if it is, I’d expect that humans who are in it would care about wild animals.

Good: Humans fill the universe with human-like beings

It’s conceivable that far-future humans will devote nearly all their resources to creating humans or human-like beings (e.g. computer emulations of humans). These humans will almost certainly have net positive lives, and probably their lives will be substantially better than the lives of modern humans.

Wild animals suffer a lot, but as suffering-producing machines they are fairly inefficient. If we efficiently utilize the available resources of the solar system, we can create much more happiness this way than the suffering that exists on earth.

At present, humans seem more concerned with preventing overpopulation than with creating more humans. In many developed countries, the average desire for children is low enough that populations are slowly shrinking. Even so, if we develop the technology to trivially produce offspring (e.g. if humans exist as computer emulations), which we likely will if we have the technology to spread among the stars, then even a small number of humans with a desire to reproduce could quickly create lots of new happy humans.

Good: Eudaimonia dominates calculations

Carl Shulman argues that the positive value of a universe filled with maximally happy beings massively outweighs the negative value of a universe filled with wild animal suffering. He claims (in an abstract way that makes it hard for me to tell if this is what he actually believes) that the probability of a maximally-happy versus a maximally-suffering universe dominates all other considerations about the far future. He claims that it is prima facie probable that the best universe has equal and opposite moral value to the worst universe.

If humans continue to exist, they may fill the universe with wild animal suffering. But they also have some probability of creating universal eudaimonia, which would be massively more good than directed panspermia would be bad, perhaps by ten orders of magnitude or more. Therefore, even a small but nontrivial probability of eudaimonia dominates considerations about the effects of far-future humans.
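
As a toy illustration of the expected-value structure here, consider the following sketch. Every number in it is a hypothetical placeholder, not an estimate from Shulman or anyone else; it only shows how an outcome ten orders of magnitude larger can dominate even at a much lower probability.

    # Toy expected-value calculation. Every number below is a hypothetical
    # placeholder chosen only to illustrate the structure of the argument.

    p_eudaimonia = 0.001   # small probability of a eudaimonia-filled universe
    v_eudaimonia = 1e10    # its value, in arbitrary units

    p_panspermia = 0.1     # much larger probability of spreading wild animal suffering
    v_panspermia = -1.0    # its disvalue: ten orders of magnitude smaller in size

    expected_value = p_eudaimonia * v_eudaimonia + p_panspermia * v_panspermia

    # expected_value comes out to roughly 1e7: the small chance of eudaimonia
    # dominates even though panspermia is 100 times more likely here.
    print(expected_value)

With these placeholder numbers the positive term exceeds the negative term by eight orders of magnitude, which is the sense in which a small probability of eudaimonia can dominate the calculation.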

I find this argument plausible but not very robust. It relies on several assumptions:

  1. Humans have a higher probability of producing a eudaimonia-filled universe than a maximum-suffering-filled universe.
  2. The only two ways humans are nontrivially likely to have an astronomical hedonic effect are through a hedonium shockwave or directed panspermia.
  3. Eudaimonia is massively more good than directed panspermia is bad.
  4. Eudaimonia is sufficiently probable that its expected positive value dominates the expected negative value of increased wild animal suffering.

(1) seems almost certainly true. Even an AI with misaligned values would likely do something orthogonal to human values rather than directly opposed. Eudaimonia might not be very likely, but the mere fact that there are currently humans who think we should do it, and there are (as far as I know) no humans who think we should fill the universe with maximum suffering, gives plausibility to the claim that a transcendently good future is much more likely than a maximally bad one.

I believe (2) is false but not necessary for the argument. If you replace “directed panspermia” with “any way that humans are likely to create lots of suffering,” (2) is no longer an objection. But this has the problem of making the argument more tenuous: what ways are humans likely to create lots of suffering? Are they all substantially less bad than a dolorium shockwave? How likely are they? It’s not implausible that future humans might for some reason run many computer simulations of conscious beings experiencing torturous states, and that these simulations would be efficient enough to be within a few orders of magnitude of dolorium in terms of badness.

I have heard people question (3), but it appears fairly obviously true to me. In humans, and probably in (almost?) all sentient animals, possible suffering dominates possible happiness, but this can be explained by the fact that harmful events can be much worse evolutionarily than beneficial events can be good. (The loss of a child is much more evolutionarily damaging than an orgasm is beneficial.) There is little reason to believe that this asymmetry holds for all possible minds. The naive assumption is that maximum happiness and maximum suffering are about equal in magnitude for minds optimized for happiness or for suffering, and I see no evidence against this assumption. Further, it’s likely that a mind specifically optimized for happiness would be capable of more happiness than an evolved animal is capable of suffering.

(4) causes the biggest problems. It resembles a Pascal’s mugging, so it is questionable on those grounds. I expect that claim (4) is probably true, but I’m suspicious of arguments about low probabilities of high payoffs.

In conclusion, I am extremely uncertain about whether this argument works.

Bad/Good: We create suffering/happy computer simulations

We could potentially create much more happiness or suffering than exists in wild animals by running large efficient computer simulations. These could be net positive or negative depending on what sorts of simulations dominate.

On this Felicifia thread, Carl Shulman raises the point that most sentient simulations will probably be happy:

Scientific simulations concerned with characteristics of intelligent life and civilizations would disproportionately focus on intelligent life, and influential intelligent life at that, with a higher standard of welfare.

If we adopt the reasonable prior belief that pain and pleasure balance out in minds optimized for feeling pain or pleasure, then when we run simulations of new kinds of minds (beyond animals), we should expect their pain and pleasure to balance out as well. We perhaps have reason to favor pleasure, because we seem likely to intentionally create happy simulations for their own sake but unlikely to create suffering simulations without some other purpose.

Good: Other smart people think it will be good

This is more of a meta-argument. I have talked to a number of well-informed and thoughtful people, and they generally agree that the expected value of the far future is good, and so preventing human extinction is good. People who have told me they believe this include Brian Tomasik (sort of; he thinks this is true for the “median classical hedonistic utilitarian” (his words) but he puts more weight on suffering), Carl Shulman, and Nick Beckstead, all of whom are extremely thoughtful and care a lot about existential risk and the far future. I put a lot of credence in these people’s beliefs. Carl Shulman also has the meta-belief that most people who have thought about this believe that the far future is net positive in expectation, and I put credence in his meta-belief.

Overall this probably represents the strongest argument in favor of the far future being net positive if humanity survives. My major concern here is that we tend to want to believe that humanity is a force for good, so we’re biased toward believing that we should prevent human extinction. Nobody wants to accept that the best thing may be for humanity to get wiped out. But even with this bias in mind, I still afford considerable weight to the common belief that preventing human extinction is good.

The Importance of the Far Future

I have described a few reasons why the far future may be net positive or negative if humanity survives. None of the arguments for the future being net positive look conclusive, but together they feel stronger than the arguments for the future being net negative.

I’d give about a 60% probability that the far future is net positive, and I’m about 70% confident that the expected value of the far future is net positive (potential good future scenarios look more significant in the expected value calculation than potential bad scenarios). This implies that preventing human extinction is extremely important in order to ensure that we can bring about a good future.
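
To make the distinction between those two numbers concrete, here is a minimal sketch with made-up placeholder magnitudes (they are not figures from this essay): the realized future can be net positive only 60% of the time while the expected value is still comfortably positive, because the good scenarios are assumed larger in magnitude than the bad ones.

    # Toy sketch distinguishing "the future is probably net positive" from
    # "the expected value of the future is positive". All magnitudes are
    # hypothetical placeholders.

    scenarios = [
        (0.6, 10.0),   # probability 0.6: future turns out net positive
        (0.4, -5.0),   # probability 0.4: future turns out net negative
    ]

    p_net_positive = sum(p for p, v in scenarios if v > 0)   # 0.6
    expected_value = sum(p * v for p, v in scenarios)        # 0.6*10 - 0.4*5 = 4.0

    print(p_net_positive, expected_value)

The separate 70% figure is then a second-order credence: my confidence that a calculation like this one comes out positive once the true probabilities and magnitudes are filled in.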

I generally buy the common arguments about the importance of shaping the far future, and I will not contribute much to the conversation by repeating them here. My primary problem with these arguments was that they rarely address why the far future is good for non-human animals. However, after writing this essay, I am substantially more confident that reducing extinction risk is good on balance and not just good for humans, and therefore that doing so is overwhelmingly important.

Nick Beckstead has written a dissertation, “On the Overwhelming Importance of Shaping the Far Future”, which he discusses in an interview here. For more in-depth arguments about why the far future matters, Beckstead’s dissertation may be the best source.