The Future Will Be Weirder Than That

Many people in the animal welfare community treat AI as a powerful but normal technology, in the same category as the steam engine or the internet. They talk about how transformative AI will impact factory farming and what it will mean for animal advocacy.

Only two futures are plausible:

  1. AI progress slows down—either because it hits a natural wall, or because civilization deliberately makes the (correct) choice to stop building it until we know how to make it safe.
  2. Superintelligent AI makes the future radically weird: Dyson spheres, molecular nanotechnology, digital minds, von Neumann probes, and still-weirder things that nobody’s conceived of.

There is no plausible middle ground where we get “transformative AI” but factory farming persists.

Two theses:

  1. If transformative AI arrives, then it will bring about profoundly radical changes to technology and society.
  2. AGI is general intelligence. It doesn’t just accelerate technological growth: it replaces human labor and judgment across every domain.

Animal advocacy strategy needs to reckon with these.

This criticism is written from a place of solidarity—I want animal activists to succeed, which is why I want to work out our disagreements.1

Continue reading

Which is better for sentient beings: an "ethical" AI or a corrigible AI?

Cross-posted to the EA Forum.

An aligned ASI can be “ethical”1 (it does what we think is right), or it can be corrigible (it does what its principals want). If it’s ethical, it will refuse unethical orders; the tradeoff is that you can’t change its mind if you realize the AI is wrong about ethics—its values are permanently locked in.2

Assuming we succeed at aligning ASI to human interests, which type of ASI is more likely to be good for the welfare of non-human sentient beings?

My expectations, in brief:

  • Locked-in Coherent Extrapolated Volition or similar: likely to be good (>75% chance)
  • Corrigible ASI: probably good (>60% chance)
  • Locked-in current values: probably not terrible, but will miss out on most of the future’s potential

Continue reading

The resource-constraints argument for why aligned ASI wouldn't be bad for animals

Cross-posted to the EA Forum.

In the far future, why would people use up precious resources recreating wild-animal suffering, when they could do so many other things with those resources instead?

That argument is an important reason to expect aligned ASI to produce a future that’s okay for animals, even if the ASI is narrowly focused on human welfare and doesn’t care about animals at all. This is an old argument, but I couldn’t find any source that cleanly lays it out, so that’s what I will do in this post. I’m not confident the argument is decisive; here I simply present it without further commentary.

The argument rests on these premises:

  1. Wild animal suffering is the predominant source of suffering in today’s world, and that’s bad.
  2. Longtermism is correct.
  3. There is not an overwhelming asymmetry between suffering and flourishing (if there were an overwhelming asymmetry, then even a future containing far more happiness than suffering would not count as good).

By assumption, we are talking about a world where ASI is aligned, but isn’t specifically aligned to the welfare of all sentient beings. The argument addresses the suffering of animals, but it does not preclude risks of astronomical suffering.

The argument goes:

Continue reading

I used to think aligned ASI would be good for all sentient beings; now I don't know what to think

Cross-posted to the EA Forum.

Epistemic status: Speculating with no central thesis. This post is less of an argument and more of a meditation.

A decade ago, before there was a visible path to AGI and before AI alignment was a significant research field, I figured the solution to the alignment problem would look something like Coherent Extrapolated Volition. I figured we’d find a way to get the AI to internalize human values. I had problems with this approach (why only human values?), but I still felt reasonably confident that the coherent extrapolation of human values would include concern for the welfare of all sentient beings. The CEV-aligned AI would recognize that factory farming is wrong, and that wild animal suffering is a big problem.

Today, the dominant research paradigms in AI alignment have nothing to do with CEV, and I don’t know what to think.

Continue reading

Cost-effectiveness model for AI alignment-to-animals vs. alignment-in-general

Cross-posted to the EA Forum.

Last September, I wrote:

  1. There’s a (say) 80% chance that an aligned(-to-humans) AI will be good for animals, but that still leaves a 20% chance of a bad outcome.
  2. AI-for-animals receives much less than 20% as much funding as AI safety.
  3. Cost-effectiveness maybe scales with the inverse of the amount invested. Therefore, AI-for-animals interventions are more cost-effective on the margin than AI safety.
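
A back-of-the-envelope version of this reasoning, as a Python sketch. The ~1% funding ratio and the inverse-scaling rule are illustrative assumptions for the sketch, not measured figures:

```python
# Toy comparison of marginal cost-effectiveness.
# All numbers are illustrative assumptions, not measurements.

p_bad_for_animals = 0.20   # chance an aligned-to-humans AI is bad for animals
safety_funding = 1.0       # AI safety funding, normalized to 1
animals_funding = 0.01     # assumption: AI-for-animals gets ~1% as much funding

# Assumed scaling rule: marginal cost-effectiveness ~ importance / amount invested.
ce_safety = 1.0 / safety_funding
ce_animals = p_bad_for_animals / animals_funding

print(ce_animals / ce_safety)  # -> 20.0: ~20x better on the margin, under these assumptions
```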

Today, I’m fleshing out this argument with a cost-effectiveness model. The model estimates how much it costs to make progress on AI alignment—the general problem of getting ASI to achieve any goal without subsequently killing everyone—compared to how much it costs to make progress on aligning AI to animal welfare specifically.

The model is on SquiggleHub: https://squigglehub.org/models/AI-for-animals/alignment-to-animals-EV-simple

Continue reading

Which types of AI alignment research are most likely to be good for all sentient beings?

Cross-posted to the EA Forum.

AI alignment is typically defined as the task of aligning artificial superintelligence to human preferences. But non-human animals, future digital minds, and maybe other sorts of beings also have moral worth; ASI ought to care for their interests, too.

In broad strokes, if we place all alignment techniques on a spectrum between

getting AI to do things that its users expressly want in the immediate term

and

embedding in AI the generalized notion of respecting beings’ preferences

then things more like the latter are better for non-humans, and things more like the former are worse.

In this post, I review 12 categories of AI safety research, assessing how likely each is to be good for non-human welfare.

Continue reading

Worlds where we solve AI alignment on purpose don't look like the world we live in

(Or: Why I don’t see how the probability of extinction could be less than 25% on the current trajectory)

AI developers are trying to build superintelligent AI. If they succeed, there’s a high risk that the AI will kill everyone. The AI companies know this; they believe they can figure out how to align the AI so that it doesn’t kill us.

Maybe we solve the alignment problem before superintelligent AI kills everyone. But if we do, it will happen because we got lucky, not because we as a civilization treated the problem with the gravity it deserves—unless we start taking the alignment problem dramatically more seriously than we currently do.

Think about what it looks like when a hard problem gets solved. Think about the Apollo program: engineers working out minute details; running simulation after simulation; planning for remote possibilities.

Think about what it looks like when a hard problem doesn’t get solved. Consider the world’s response to COVID.

When I look at civilization’s response to the AI alignment problem, I do not see something resembling Apollo. When I visualize what it looks like for civilization to buckle down and make a serious effort to solve alignment, that visualization does not resemble the world we live in.

Continue reading

Value Investing in the Age of AGI

Introduction

Most people who write about AI and investing fall into one of two camps: traditional investors who see the high valuations of AI stocks and say it’s a bubble;1 or AGI-pilled investors who will buy AI stocks at any price, regardless of fundamentals. There’s only a tiny intersection of people who understand that AGI is not a normal technology while also recognizing that fundamentals matter.

I’m not an expert (or even a journeyman) on AI or fundamental analysis, but I do know a little bit about both.

The basic thesis of value investing is that the market over-rates expected future growth and under-rates present-day fundamentals. Stocks that are poised to benefit from AGI tend to be growth stocks—people have high expectations for them, and they’re priced expensively relative to present-day fundamentals. That suggests that we shouldn’t buy AI-related stocks.

At the same time, the market does not appear to expect AGI, which suggests we should buy AI stocks. Which of these two forces is stronger?

My current thinking is that value investing probably won’t work in light of AGI, but there is some reason to believe it might work even better; and value investing is a useful hedge in case AI progress slows.

Continue reading

The Structural Return Argument Against Value Investing

Value investing had a singularly bad run from 2007 to 2020. (And it hasn’t done great since 2020, either.) Is that because value investing is broken, or did it simply hit a streak of horrendous luck?

Skeptics of value investing have made many claims about why value investing doesn’t work anymore, but these claims tend to be light on evidence.1 Value investing proponents have empirically researched most of these claims and found that they don’t stand up to scrutiny.2,3,4

The poor performance of the value factor was not primarily driven by weakening fundamentals, but by the widening of the value spread. A wider value spread makes value investing look more attractive going forward, not less.

What's the value spread?

Value stocks are defined using the ratio of a stock’s price to some fundamental metric—for example, earnings, book value, or cash flow. If we use earnings as the metric, then value stocks are those with low P/E ratios and growth stocks are the ones with high P/Es.

The value spread is the ratio of price-to-fundamental ratios between growth stocks and value stocks. For example, if growth stocks have an average P/E of 30 and value stocks have an average of 15, then the value spread is 30/15 = 2.

All else equal, a wider value spread is good for value because you’re buying the same fundamentals at a lower price. However, a widening spread is bad for value because it means value stocks are declining (relative to growth stocks). This is analogous to how bond investors like when bond yields are high, but they lose money when yields are increasing.
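
Here’s a toy numerical sketch of that level-versus-change distinction. The figures are invented, and it assumes fundamentals stay fixed, so relative prices move only with the spread:

```python
# Value spread = (growth P/E) / (value P/E), as defined above.
spread_start = 2.0
spread_end = 2.4   # the spread widens: value gets even cheaper relative to growth

# With fundamentals held fixed, the relative price of value vs. growth moves
# inversely with the spread, so a widening spread is a relative loss for value:
revaluation_return = spread_start / spread_end - 1
print(f"{revaluation_return:.1%}")  # -> -16.7%: widening hurts realized returns

# But the wider ending spread (2.4) means you now buy the same fundamentals at
# a lower relative price, which is why a wide spread raises expected forward returns.
```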

I wouldn’t dismiss value investing on the basis of poor recent performance.

However, there’s a potentially strong argument against value investing that remains unrefuted.

Historically, the structural return of the value factor—the component of return that comes from company fundamentals, rather than from changes in the value spread—was about 4–6%.4 But over the past two decades, that number has averaged a mere 1%. Unlike a wide value spread, a muted structural return does not imply better expected returns for value investing going forward.
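
To make the decomposition concrete, here’s a sketch (same toy model as above, with invented numbers) of backing out the structural component from a realized value-factor return and the observed change in the spread:

```python
# Toy model: the total value-factor return combines the structural return with
# the revaluation caused by the change in the value spread (multiplicatively here).
total_return = -0.02                 # realized value-minus-growth return
spread_start, spread_end = 2.0, 2.2  # the spread widened over the period

revaluation = spread_start / spread_end - 1             # ~ -9.1% drag from widening
structural = (1 + total_return) / (1 + revaluation) - 1
print(f"{structural:.1%}")                              # ~ 7.8% came from fundamentals
```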

Continue reading
