Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they’re true, so I’m not even going to try.

Cross-posted to the Effective Altruism Forum.

If welfare is important, and if the value of welfare scales something-like-linearly, and if there is nothing morally special about the human species[1], then these two things are probably also true:

  1. The best possible universe isn’t filled with humans or human-like beings. It’s filled with some other type of being that’s much happier than humans, or has much richer experiences than humans, or otherwise experiences much more positive welfare than humans, for whatever “welfare” means. Let’s call these beings Welfareans.
  2. A universe filled with Welfareans is much better than a universe filled with humanoids.

(Historically, people have referred to these beings as “hedonium”. I dislike that term because hedonium sounds like an inert substance, not like something that matters. It’s supposed to be the opposite of that: the most profoundly, intrinsically valuable kind of sentient being. So I think it’s better to describe the beings as Welfareans. I suppose we could also call them Hedoneans, but I don’t want to constrain myself to hedonistic utilitarianism.)

Even in the “Good Ending”, where we solve AI alignment, governance, and coordination problems and end up with a superintelligent AI that builds a flourishing post-scarcity civilization, will there be Welfareans? In that world, humans will be able to create a flourishing future for themselves; but beings who don’t exist yet won’t be able to give themselves good lives, because they don’t exist.

My guess is that a tiny subset of crazy people (like me) will spend their resources making Welfareans, who will end up occupying only a tiny percentage of the accessible universe, and as a result, the future will be less than 1% as good as it could have been.

(And maybe my conception of Welfareans will be wrong, and some other weirdo will be the one who makes the real Welfareans.)

I want the future to be nice for humans, too. (I’m a human.) But all we need to do is solve AI alignment (and various other extremely difficult, seemingly insurmountable problems), and humans will turn out fine. Welfareans can’t advocate for themselves, and I’m afraid they won’t get the advocates they need.

There is one reason why Welfareans might inherit most of the universe. Generally speaking, people don’t care about filling all available space with Dyson spheres to maximize population. They just want to live in their little corner of space, and they’d be happy to let the Welfareans have the rest.

It’s probably true that most people aren’t maximizers. But some people are maximizers, and most of them won’t want to maximize Welfareans; they’ll want to maximize some other thing. A lot of people will want to maximize how much of the universe is captured by humans or post-humans (or even just their personal genetic lineage). Mormons will want to maximize the number of Mormons or something. There are enough maximizing ideologies that I expect Welfareans to get squeezed out.

So what can we do for the Welfareans?

There are two problems:

  1. Who even are the Welfareans?
  2. How do we ensure that the Welfareans get their share of the future’s resources?

Solving problem #1 approximately requires solving ethics (or, I guess, axiology). I’m not going to say more about that problem; I hope we can agree that it’s hard.

For problem #2, the first answer that comes to mind is “make a power grab for as many resources as possible so I can give them to Welfareans later on”. But I’m guessing that if we solve ethics (as per problem #1), The Solution To Ethics will include a bit that says something along the lines of “don’t take other people’s stuff”. And there are only like three of us who would even care about Welfareans, so I don’t think we’d get very far anyway.

So how do we increase Welfareans’ share of resources, but in an ethical manner? I don’t know. I’m going to start with “write this essay about Welfarean welfare”.


Notes

  1. In my first draft, the opening sentence said “If something like utilitarianism is true, …”. But this is an unnecessarily strong premise. You don’t need utilitarianism, you just need linear aggregation + antispeciesism. A non-consequentialist can still believe that more welfare is better (all else equal). Such a person would still want to maximize the aggregate welfare of the universe, subject to staying within the bounds of whatever moral rules they believe in.
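
     To make that concrete, here is a minimal sketch of what I mean by “linear aggregation” (the symbols are purely illustrative and not load-bearing anywhere else in the essay): if the universe contains beings $1, \dots, N$ with welfare levels $w_1, \dots, w_N$, then the aggregate value is

     $$V = \sum_{i=1}^{N} w_i,$$

     so value scales linearly both in how many beings there are and in how well off each one is. On this sketch, a universe of $N$ Welfareans at welfare $w_W$ beats a universe of $N$ humans at welfare $w_H$ whenever $w_W > w_H$, which is all that claims 1 and 2 above need.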