Many people in the animal welfare community treat AI as a powerful but normal technology, in the same category as the steam engine or the internet. They talk about how transformative AI will impact factory farming and what it will mean for animal advocacy.

Only two futures are plausible:

  1. AI progress slows down—either because it hits a natural wall, or because civilization deliberately makes the (correct) choice to stop building it until we know how to make it safe.
  2. Superintelligent AI makes the future radically weird: Dyson spheres, molecular nanotechnology, digital minds, von Neumann probes, and still-weirder things that nobody’s conceived of.

There is no plausible middle ground where we get “transformative AI”, but factory farming persists.

Two theses:

  1. If transformative AI arrives, then it will bring about profoundly radical changes to technology and society.
  2. AGI is general intelligence. It doesn’t just accelerate technological growth: it replaces human labor and judgment across every domain.

Animal advocacy strategy needs to reckon with these.

This criticism is written from a place of solidarity—I want animal activists to succeed, which is why I want to work out our disagreements.1

Cross-posted to the EA Forum.


AI makes the future weird

Much has been written about why we should expect AI to make the future weird, and soon:

Daniel Kokotajlo wrote a vivid illustration of what it would feel like to live alongside superhuman AI. An excerpt:

In the future, there will be millions, and then billions, and then trillions of broadly superhuman AIs thinking and acting at 100x human speed (or faster). If all goes well, what might it feel like to live in the world as it undergoes this transformation?

Analogy: Imagine being a typical person living in England from 1520 to 2020 (500 years) but experiencing time 100x slower than everyone else, so to you it feels like only five years have passed:

Year 1 (1520–1620). A year of political turmoil. In February, Henry VIII breaks with Rome. By March, the monasteries are dissolved. In May, Mary burns Protestants; by the end of May, Elizabeth reverses everything again. Three religions of state in the span of a season. In September, the Spanish Armada sails and fails. Jamestown is founded around November. The East India Company is chartered. But the texture of life is identical in December to what it was in January. You still read by candlelight, travel by horse, communicate by letter. Your religious opinions may have flip-flopped a bit but you are still Christian. The New World is interesting news but nothing more.

[…]

Year 4 (1820–1920). The world breaks. In January, railways appear — steam-powered carriages on iron tracks. By February they’re everywhere. Slavery is abolished. The telegraph arrives in March: messages transmitted instantaneously by electrical signal. In May, Darwin publishes On the Origin of Species. Now people are saying maybe we’re all descended from monkeys instead of Adam and Eve. You don’t believe it.

You move to a city and work in a factory; you are still poor, but now your job is somewhat better and differently dirty. In July, you pick up a telephone and hear a human voice from another city through a wire. In August, electric light banishes the darkness that has structured every human evening since the beginning of the species. That same month, you see an automobile. People say it will make horses obsolete, but that doesn’t happen; months later you still see plenty of horses.

In November, the Wright Brothers fly. Up until now you thought that was impossible. The next month, the Great War happens. Machine guns, poison gas, tanks, aircraft. Several of your friends die.

Reflecting at the end of the year, you are struck by how visibly different everything is. You live in a city and work in a factory instead of on a farm. You ride around in horseless carriages. You aren’t as poor; numerous inventions and contraptions have improved your quality of life. New ideas have swept your social circles — atheism, communism, universal suffrage. It feels like a different world.

We don’t know where we would be with another 500 years of scientific and technological advancement. At minimum, we can reasonably predict that we would figure out how to build advanced technologies like molecular nanotechnology and self-replicating probes—which are possible in theory2, but far out of reach of our current capabilities. Superhuman AI with a 100x speedup could develop those technologies in five years or so. Maybe more, maybe less3, but it certainly wouldn’t take 500 years.

If you can build self-replicating probes, then you can trivially create self-growing cultivated meat at a lower price point than animal meat. But saying self-replicating probes can make cultivated meat is like saying electricity can heat up food faster than a wood fire—yes it can, but that’s barely scratching the surface of what it can do.

Even in the relatively normal world where AI (somehow) caps out at the intelligence of a 99th percentile human, the world will look extraordinarily different. At minimum, we’d see close to a 100% unemployment rate. In all likelihood, the political, economic, and social environment as we know it would cease to exist.

AGI = intelligence

People often talk as if AGI is an R&D-accelerator or an economic-growth-engine. It’s not: AGI is intelligence. AGI is general: it can do anything that you and I can do, but faster, cheaper, and better.

Below are some excerpts from posts on AIxAnimals that don’t fully reckon with the weirdness of AI:

When clean meat arrives (if it does), the movement will need skilled campaigners, policy expertise, organisational infrastructure, relationships with policymakers, experienced leadership, and research to understand this whole TAI situation. (source)

You don’t need campaigners if AGI will be a better campaigner than you. You don’t need policy expertise if AGI will know more about policy than you. This passage treats AGI as a machine that accelerates scientific R&D, but that’s not what AGI is. AGI is intelligence.

We are launching a pooled fund for projects at the AIxAnimals intersection. […] [W]e are most interested in projects that fall under the following categories: [abridged]

  • AI literacy workshops or training programs for nonprofit staff, building on the few initiatives that already exist and expanding their reach and depth.
  • AI-powered grant-finding and drafting systems focused on adjacent sources of funding.
  • Horizon-scanning studies mapping how AI might enable the large-scale farming of novel species (e.g., cephalopods, insects).
  • Policy analysis identifying how public AI investments (e.g., agricultural innovation funds) could be redirected to support alternative proteins.

(source)

Those are not all bad ideas, per se, but they have an expiration date. AI literacy workshops become less useful as AI becomes smarter (the smarter the AI, the easier it is to work with4), and once AIs surpass human workers, AI literacy will become entirely irrelevant. I would be much more interested in an RFP that focuses on superintelligence, rather than on the (probably short) transition period between 2026 and AGI.

[Cultivated meat] bans are primarily driven by agricultural lobby pressure. There is no obvious mechanism by which AGI reverses these political dynamics directly. If anything, if cultivated meat becomes more viable and widely produced, you could just as reasonably expect greater pushback from the agricultural lobby. (source)

(emphasis mine)

There is no obvious mechanism by which 2026-era political dynamics still have any force after the emergence of AGI! Even granting that we solve the alignment problem, even describing a post-AGI world in which current law still applies is an open problem.5

I’m picking on animal activists because that’s who I most want to see succeed, but it’s not just animal activists who underestimate the weirdness of AI. There’s a common notion that transformative AI will fully automate labor, while capital owners will reap the benefits—their property rights and shareholder rights will be preserved post-AGI. Other people have already written extensively about why this notion is implausible: see Dos Capital by Zvi Mowshowitz and this long tweet [archive] by Tomás Bjartur.

Cope level 1: My labour will always be valuable!

Cope level 2: That’s naive. My AGI companies stock will always be valuable, may be worth galaxies! We may need to solve some hard problems with inequality between humans, but private property will always be sacred and human.

-Jan Kulveit

If the future will be weird, what should animal activists do?

That’s the big question.

Some questions, like what strategies animal activists should pursue post-AGI, are nearly impossible to answer. AGI will be better at strategizing than you are, and you can’t predict what strategies it would come up with. (If you can predict what chess moves Magnus Carlsen will make, then you can beat Magnus Carlsen at chess.)

Other things about AGI are predictable. I can predict that it speeds up almost all kinds of work. I can predict that AGI will control the shape of the future—either because it has explicit control, or because humans retain control but still rely on AGI to do most of the work (because AGI is better than humans at almost all tasks). I can predict that, on our current trajectory, ASI will follow shortly after AGI (see AI As Profoundly Abnormal Technology, linked previously). I can predict that if ASI is misaligned, then it will wipe out all life on earth.

Some questions that are still worth asking in light of the weirdness of the future:

  • What’s going on with AI alignment, and how does alignment work relate to non-human welfare?
  • How likely is it that aligned AI will be good for non-human welfare, and how does that probability vary based on timing or the method of alignment? (See my previous writings: Which approaches are most likely to be good for all sentient beings?; Which is better for sentient beings: an “ethical” AI or a corrigible AI?)
  • How could AI be influenced to expand its circle of compassion? (This question also relates to AI alignment in that it depends on the ability to reliably direct AI at a goal.)
  • For other actions aimed at preventing human extinction—AI governance work, advocating for regulations, etc.—what effects might they have on non-human welfare?
  • The meta-question: What other meaningful questions can we ask?

Previously, I wrote a list of possible strategies for having positive impact on animals in light of ASI, with some brief pros and cons. See also A shallow review of what transformative AI means for animal welfare by Lizka Vaintrob and Ben West. I second their recommendations that animal activists should:

  • Dedicate some amount of (ongoing) attention to the possibility of animal welfare lock-ins.
  • Pursue other exploratory research on what transformative AI might mean for animals & how to help.

I also second their recommendation that animal activists should NOT focus on farmed animals when thinking about the long-run future of animals.

My high-level recommendations for how to plan for the future:

  • Prepare for the possibility that, once AI is sufficiently advanced, humans will have no control over the future.
  • Don’t think of AGI as an R&D accelerator. Think of it as a general intelligence.

Notes

  1. I’m not confident that this post does a good job of addressing where “AI-as-normal-technology” animal activists are coming from. But I figure it’s better to hit “submit” and engage in public dialogue than to tinker with a draft forever until my arguments are perfect. 

  2. Eric Drexler’s book Nanosystems is about why molecular nanotechnology is possible in theory. We know for sure that self-replicating probes are possible because life exists. 

  3. More because some kinds of progress can’t be parallelized. Less because the “100x speedup” assumes AI is faster than humans, but doesn’t account for the fact that it’s also smarter; and 500 years is an upper bound on how long it would take humanity to develop those technologies. 

  4. In 2023, you needed to learn prompt engineering tricks to elicit good work out of LLMs. In 2026, you don’t.

    In 2023, LLMs could write boilerplate code for you, like a fancy auto-complete. In 2026, LLMs can write entire apps with no supervision. 

  5. I should also respond to the Caveats section from the quoted article, because it explicitly brings this up:

    [W]e don’t address scenarios in which AGI drastically reshapes institutional and political dynamics. A sufficiently capable AI might find creative strategies for regulatory reform or public persuasion that we can’t currently foresee. Governments and agencies could be restructured, approval frameworks could be overhauled, and entirely new institutional designs could emerge that bear little resemblance to current processes. As above, we focus on existing institutional structures because they allow actionable analysis, but we acknowledge this is a limitation.

    It is difficult to predict how governments and institutions will change post-AGI. If you have extreme uncertainty, then you might reasonably decline to make a prediction. But predicting that governments and institutions won’t change is still a prediction!

    Rather than predicting no change, here’s something else I could say to allow actionable analysis:

My assumption is that the first ASI will be a constitutional AI that becomes a world government singleton, and its values will be determined by its constitution.

    This scenario is both easier to analyze (you can ignore political and regulatory factors and just focus on the text content of the AI constitution) and more likely to actually happen (although still unlikely).