Summary: Any probability distribution over an action's utility must either (1) have an infinite mean, or (2) have such a thin tail that you can never reasonably expect to see extreme values. When we’re estimating the utility distribution of an intervention, both of these options are bad.

Last edited 2018-06-15 to make title more descriptive.

Note: For this essay I will assume that actions cannot produce negative utility. If you start letting actions have negative value, the reasoning gets way more complicated. Maybe that’s an essay for another time.

Infinite ethics make everything bad

The possibility of infinitely good actions poses a substantial challenge to ethics. In short, if an action has nonzero probability of producing infinite value, then it has infinite expected value. Once actions have infinite expected value, we can’t rank them or even reason about them in any meaningful way. (Nick Bostrom discusses this and other problems in his paper, Infinite Ethics.)

There exists a very real possibility that our universe is infinitely large, which would pose serious problems for ethical decisions. And, less likely but still not impossible, some actions we take could have some probability of producing infinite utility. I can get myself to sleep at night by hand-waving this away, saying that I’ll just assume the universe is finite and all our actions have zero probability of creating infinite value. I don’t like this “solution”, but at least it means I can ignore the problem without feeling too bad about it.

But there’s another problem that I don’t think I can ignore.

Expected value distributions make everything much worse

You’re considering taking some action (such as donating to charity). You have (explicitly or implicitly) some probability distribution over the utility of that action. Either your distribution has infinite expected value, or it has finite expected value.

If your utility distribution has infinite expected value, that’s bad. Infinite expected values ruin everything.

If your utility distribution has finite expected value, then according to your distribution, the probability of the action producing at least X utility must shrink faster than 1/X as X grows. If the action’s utility followed a Pareto distribution with parameter alpha = 1, the probability of producing at least X utility would be exactly 1/X, and such a distribution has infinite expected value. So any finite-expected-value distribution must have a thinner tail than a Pareto distribution with alpha = 1.

(If you’re unfamiliar, you don’t need to worry about what it means for a Pareto distribution to have alpha = 1. Suffice it to say that Pareto distributions may have fatter or thinner tails, and the value of alpha determines the thinness of the tail: larger alpha means a thinner tail. alpha = 1 is the boundary at which the expected value of the distribution becomes infinite.)
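To make the boundary concrete, here is a minimal numerical sketch in Python (assuming numpy; the exact figures will vary with the random seed). It draws from Pareto distributions with alpha = 1 and alpha = 2 and shows that the sample mean keeps growing in the first case but settles down in the second:

    import numpy as np

    rng = np.random.default_rng(0)

    for alpha in [1.0, 2.0]:
        # numpy's pareto() draws from the Lomax form; adding 1 gives the
        # standard Pareto with minimum 1, where P(U > X) = X**(-alpha).
        samples = 1.0 + rng.pareto(alpha, size=10_000_000)
        for n in [10**3, 10**5, 10**7]:
            print(f"alpha={alpha}, n={n}: sample mean = {samples[:n].mean():.2f}")

With alpha = 2 the running mean converges to the true mean of 2; with alpha = 1 no amount of sampling pins it down, which is the practical face of an infinite expected value.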

When you believe that an action’s utility distribution has thinner tails than an alpha=1 Pareto distribution, you run into the Charity Doomsday Argument. If you discover some evidence that the action has much higher expected value than you thought, you are pretty much compelled by your prior distribution to stubbornly disbelieve the evidence:

To really flesh out the strangely strong conclusion of these priors, suppose that we lived to see spacecraft intensively colonize the galaxy. There would be a detailed history leading up to this outcome, technical blueprints and experiments supporting the existence of the relevant technologies, radio communication and travelers from other star systems, etc. This would be a lot of evidence by normal standards, but the [finite expected value] priors would never let us believe our own eyes: someone who really held a prior like that would conclude they had gone insane or that some conspiracy was faking the evidence.

Yet if I lived through an era of space colonization, I think I could be convinced that it was real. […] So a prior which says that space colonization is essentially impossible does not accurately characterize our beliefs.

If you put a prior of 10^-50 on having an impact of 10^48 utility (the exact numbers don’t matter too much), you require extraordinary, un-meetable levels of evidence to convince yourself that you could ever actually have that much impact.
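To see just how stubborn such a prior is, here is a small worked sketch in Python (the prior and likelihood ratio are hypothetical numbers chosen purely for illustration). Even evidence a trillion times likelier under the hypothesis than under its negation barely dents a 10^-50 prior:

    import math

    prior = 1e-50
    log_odds = math.log10(prior / (1 - prior))  # roughly -50

    # Hypothetical evidence: a trillion times likelier if the hypothesis
    # is true than if it is false -- stronger than almost any real
    # observation. Working in log10-odds avoids floating-point underflow.
    likelihood_ratio = 1e12
    log_odds += math.log10(likelihood_ratio)

    posterior = 10**log_odds / (1 + 10**log_odds)
    print(f"posterior = {posterior:.3e}")  # about 1e-38: still essentially zero

At that rate it would take five independent observations of comparable strength before the hypothesis became more likely than not.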

We must either accept that an action has infinite expected value, or give it a “stubborn” distribution that massively discounts any evidence that would substantially change our beliefs.

This actually matters

You could say that this concern is merely academic. In practice we don’t formally produce utility distributions for all our actions and then calculate their expected values. And while it’s true that we don’t often use utility distributions explicitly, we still use them implicitly: we could take our beliefs and write down the utility distribution they correspond to. And this distribution must either have a thin tail, in which case it’s too stubborn, or a fat tail, in which case its expected utility is infinite.

What to do?

Let’s look at a few possible resolutions. Everything beyond this point is highly speculative and probably useless.

Resolution 1: Accept the infinite distribution

In some cases, people generally behave consistently with assuming a fat-tailed distribution[1], in that they readily update on evidence, even when it points to a dramatically different result than what they previously believed. For example, I don’t know anyone who refuses to believe that the known universe contains 10^24 stars.

That doesn’t mean we should actually use distributions with infinite expected value, although it hints that maybe we can find a way to use them. Even if we don’t ever use infinite distributions, infinite ethics still poses problems that we need to solve. If we can solve those problems then perhaps we can freely use fat-tailed distributions.

How well can we actually solve them? So far we don’t have any satisfying solutions. Adam Jonsson recently wrote something in the direction of a solution (see also this informal discussion of the result), but we still have a lot of unresolved concerns. Maybe if we can figure out these issues surrounding infinite ethics, then we can accept that actions have infinite expected value[2].

Resolution 2: Accept the finite distribution

The Charity Doomsday Argument is merely counterintuitive, not ethics-destroyingly decisionmaking-ruiningly terrible. So perhaps it’s the better choice.

In some cases, people behave more consistently with assuming a thin-tailed distribution. For instance, in Pascal’s mugging, people refuse to believe the evidence presented by the mugger. Some people disbelieve the astronomical waste argument for similar reasons, although others disagree.

Resolution 3: Reject utilitarianism

The problems with utility distributions don’t matter if we don’t care about utility, right? Problem solved! Except that non-utilitarianism has tons of other problems.

Resolution 4: Follow your common sense

In practice, we often have no difficulty saying we prefer one action to another. Perhaps we could argue that we’re more confident about this than we are about some abstract claims about probability distributions. If our math gives bad results, then maybe something’s wrong with the math; maybe it doesn’t really apply to the real world.

But lack of rigor makes me uncomfortable. If we have a good reasoning process, we can probably formalize it somehow. The problems raised by this essay demonstrate that I have no idea how to formalize my reasoning. I don’t want to just follow “common sense” because I have so many biases that I can’t properly trust my instincts. So I don’t feel great about resolution #4, even though it’s the only one that appears to work at all.

Once again we have failed to solve any problems but only descended deeper into a hopeless spiral of indecision.

Notes

Thanks to Jake McKinnon, Buck Schlegeris, and Ben Weinstein-Raun for reading drafts of this essay.

  1. “Fat-tailed” is a relative term. Usually any Pareto distribution is considered fat-tailed, even if it has finite expected value. Here I use “fat-tailed” to refer only to distributions with infinite expected value. 

  2. Distributions that assign probability only to finite outcomes but still have infinite expected value might actually be easier to deal with than some other problems in infinite ethics. I haven’t investigated this, but one thing that comes to mind is we could say a distribution A is better than distribution B if, for all n, the integral from 0 to n of A is greater than the integral from 0 to n of B. (That obviously doesn’t solve everything, but it’s a start.)
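     One hedged way to make that comparison concrete, assuming "the integral from 0 to n" is read as the expected value with payoffs capped at n (which stays finite even when the full expected value diverges): the Monte Carlo sketch below, in Python with numpy, compares two hypothetical fat-tailed prospects, where A is B plus one guaranteed extra unit of utility.

         import numpy as np

         rng = np.random.default_rng(0)
         b = 1.0 + rng.pareto(1.0, size=1_000_000)  # B ~ Pareto(alpha=1): infinite mean
         a = b + 1.0                                # A: the same prospect plus 1 utility

         for n in [10.0, 1e3, 1e6]:
             # Capping payoffs at n keeps both expectations finite and comparable.
             print(f"cap n={n:g}: E[min(A,n)] = {np.minimum(a, n).mean():.2f}"
                   f" > E[min(B,n)] = {np.minimum(b, n).mean():.2f}")

     Here A comes out ahead at every cap, so the criterion ranks A above B even though both have infinite expected value; pairs where neither dominates at every cap simply go unranked.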