The Future Will Be Weirder Than That

Many people in the animal welfare community treat AI as a powerful but normal technology, in the same category as the steam engine or the internet. They talk about how transformative AI will impact factory farming and what it will mean for animal advocacy.

Only two futures are plausible:

  1. AI progress slows down—either because it hits a natural wall, or because civilization deliberately makes the (correct) choice to stop building it until we know how to make it safe.
  2. Superintelligent AI makes the future radically weird: Dyson spheres, molecular nanotechnology, digital minds, von Neumann probes, and still-weirder things that nobody’s conceived of.

There is no plausible middle ground where we get “transformative AI” but factory farming persists.

Two theses:

  1. If transformative AI arrives, then it will bring about profoundly radical changes to technology and society.
  2. AGI is general intelligence. It doesn’t just accelerate technological growth: it replaces human labor and judgment across every domain.

Animal advocacy strategy needs to reckon with these.

This criticism is written from a place of solidarity—I want animal activists to succeed, which is why I want to work out our disagreements.1

Continue reading

Which is better for sentient beings: an "ethical" AI or a corrigible AI?

Cross-posted to the EA Forum.

An aligned ASI can be “ethical”1 (it does what we think is right), or it can be corrigible (it does what its principals want). If it’s ethical, that means it will refuse unethical orders, but the tradeoff is that you can’t change its mind if you realize that the AI is wrong about ethics—its values are permanently locked in.2

Assuming we succeed at aligning ASI to human interests, which type of ASI is more likely to be good for the welfare of non-human sentient beings?

My expectations, in brief:

  • Locked-in Coherent Extrapolated Volition or similar: likely to be good (>75% chance)
  • Corrigible ASI: probably good (>60% chance)
  • Locked-in current values: probably not terrible, but will miss out on most of the future’s potential
Continue reading

I used to think aligned ASI would be good for all sentient beings; now I don't know what to think

Cross-posted to the EA Forum.

Epistemic status: Speculating with no central thesis. This post is less of an argument and more of a meditation.

A decade ago, before there was a visible path to AGI and before AI alignment was a significant research field, I figured the solution to the alignment problem would look something like Coherent Extrapolated Volition. I figured we’d find a way to get the AI to internalize human values. I had problems with this approach (why only human values?), but I still felt reasonably confident that the coherent extrapolation of human values would include concern for the welfare of all sentient beings. The CEV-aligned AI would recognize that factory farming is wrong, and that wild animal suffering is a big problem.

Today, the dominant research paradigms in AI alignment have nothing to do with CEV, and I don’t know what to think.

Continue reading

Worlds where we solve AI alignment on purpose don't look like the world we live in

(Or: Why I don’t see how the probability of extinction could be less than 25% on the current trajectory)

AI developers are trying to build superintelligent AI. If they succeed, there’s a high risk that the AI will kill everyone. The AI companies know this; they believe they can figure out how to align the AI so that it doesn’t kill us.

Maybe we solve the alignment problem before superintelligent AI kills everyone. But if we do, it will happen because we got lucky, not because we as a civilization treated the problem with the gravity it deserves—unless we start taking the alignment problem dramatically more seriously than we currently do.

Think about what it looks like when a hard problem gets solved. Think about the Apollo program: engineers working out minute details; running simulation after simulation; planning for remote possibilities.

Think about what it looks like when a hard problem doesn’t get solved. Consider the world’s response to COVID.

When I look at civilization’s response to the AI alignment problem, I do not see something resembling Apollo. When I visualize what it looks like for civilization to buckle down and make a serious effort to solve alignment, that visualization does not resemble the world we live in.

Continue reading

Value Investing in the Age of AGI

Introduction

Most people who write about AI and investing fall into one of two camps: traditional investors who see the high valuations of AI stocks and say it’s a bubble;1 or AGI-pilled investors who will buy AI stocks at any price, regardless of fundamentals. There’s only a tiny intersection of people who understand that AGI is not a normal technology while also recognizing that fundamentals matter.

I’m not an expert (or even a journeyman) on AI or fundamental analysis, but I do know a little bit about both.

The basic thesis of value investing is that the market overrates expected future growth and underrates present-day fundamentals. Stocks that are poised to benefit from AGI tend to be growth stocks—people have high expectations for them, and they’re priced expensively relative to present-day fundamentals. That suggests that we shouldn’t buy AI-related stocks.

At the same time, the market does not appear to expect AGI, which suggests we should buy them. Which of these two forces is stronger?

My current thinking is that value investing probably won’t work in light of AGI, although there is some reason to believe it might work even better than usual; either way, value investing is a useful hedge in case AI progress slows.

Continue reading

If AI alignment is only as hard as building the steam engine, then we likely still die

You may have seen this graph from Chris Olah illustrating a range of views on the difficulty of aligning superintelligent AI:

Evan Hubinger, an alignment team lead at Anthropic, says:

If the only thing that we have to do to solve alignment is train away easily detectable behavioral issues…then we are very much in the trivial/steam engine world. We could still fail, even in that world—and it’d be particularly embarrassing to fail that way; we should definitely make sure we don’t—but I think we’re very much up to that challenge and I don’t expect us to fail there.

I disagree; if governments and AI developers don’t start taking extinction risk more seriously, then we are not up to the challenge.

Continue reading

I'm wary of increasing government expertise on AI

Many people in AI safety, especially AI policy, want to increase government expertise. For example, they want to place people with AI research experience in relevant positions within government. That may not be a good idea.

People who better understand AI can write more useful regulations. However, people with relevant expertise (such as ML researchers) tend to be less in favor of strong regulations and more in favor of accelerating AI development.1 We need regulations to prevent misaligned AI from killing everyone, and to prevent other kinds of catastrophes. If government expertise goes up, all else equal we will get fewer such regulations, not more.

Continue reading

Rest in Peace Commento; Long Live Comentario

As of a few days ago, my website supported comments via Commento. If you click on that link, you will find that the page doesn’t load. Unfortunately, that website was also hosting my website’s comments, so all the comments are gone now, and I have no way to recover them.1 Some of y’all left some good comments, but future readers will never know what they were.2

(I knew Commento was no longer actively supported, but in my foolishness, I thought to myself, well, the comments still work, so I’ll keep using it. Too bad I didn’t back up the comments while I had the chance.)

Commento was my third comment system. Originally I used Disqus, but I didn’t like how it impacted page load times, and I didn’t like how it disrespected my readers’ privacy. So I switched to a janky basic HTML commenting system that required me to manually copy/paste people’s comments into a text file so my website could serve them statically. That system was annoying3, so I switched to Commento, which was lightweight, privacy-respecting, and didn’t require manual effort on my part.

Commento is dead. My website now uses Comentario, which is basically the same as Commento except that (1) it still exists and (2) it’s self-hosted, which means even if Comentario stops existing and the website disappears, the comments on my website will still work.

(Commento had an option for self-hosting, but it looked like a lot of work so I didn’t do it.)

(Setting up Comentario self-hosting was a lot of work, confirming my suspicions. It took me about 10 hours, although to be fair, 8 of those 10 hours were spent trying to upgrade my server’s operating system because it was too old to be compatible with Comentario; it had also reached end-of-life in 2021 and probably had a lot of security vulnerabilities, oops. Anyway, I hope y’all appreciate all the work I’m doing to prevent Disqus from spying on you.)

Notes

  1. I reached out to customer support. I think they are AWOL, but I’ll see if I can get them to send me a database backup. 

  2. They’re not saved on web.archive.org either, because the comments were loaded dynamically. 

  3. The one upside to that system was that the “spam filter” was a primitive honeypot that you’d think would be trivial for spambots to route around, yet they nonetheless fell for it 100% of the time.

    The HTML for the comment submission form looked something like this:

    <span hidden>
      <button name="Submit (anyone who clicks this button is a spambot)">Submit</button>
    </span>
    <span>
      <button name="Submit">Submit</button>
    </span>
    

    Spambots would click on the fake hidden button every time. (I’m not exaggerating—out of the hundreds of spambots who attempted to comment on my website, literally zero of them made it past this filter.) 
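    The server-side check for this kind of honeypot can be sketched in a few lines. This is a hypothetical illustration, not the actual filter code; the field names are simplified stand-ins for the button `name` attributes above.

    ```python
    def is_spambot(form_data: dict) -> bool:
        """Return True if the submission triggered the honeypot.

        The trap button is wrapped in a hidden <span>, so a human never
        sees or clicks it; a bot that blindly submits every button in
        the form will include the trap field in its POST data.
        """
        return "submit_trap" in form_data


    # A bot that clicks every button submits both fields:
    assert is_spambot({"submit_trap": "1", "submit": "1", "comment": "spam"})
    # A human only submits the visible button:
    assert not is_spambot({"submit": "1", "comment": "nice post"})
    ```

    The appeal of this design is that it needs no third-party service and no CAPTCHA: the filter is a single membership check on the submitted form fields.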


I need the Writing Style Guide people to figure out how to put a smiley face inside parentheses

I can’t figure out any good way to put a smiley emoticon inside parentheses. There are five choices, all of which are bad:

  • Do the straightforward thing of just writing it (which puts two parentheses next to each other in a row, and makes it unclear where the smiley face ends and the parenthesis proper begins :)).1
  • Do that, but put the period inside the parentheses. (Which requires restructuring your sentences, and ends up looking ugly anyway, like some deformed double-mouth emoticon :).)
  • Put a space between the emoticon and the close parenthesis (which does look more visually distinct, but there’s no other situation where you put a space before the close parenthesis :) ).
  • Only put a single close parenthesis (possibly the worst option because you can’t tell if the parenthesis is part of the punctuation or part of the emoticon :).
  • Add some text after the smiley so it’s not at the end of the parenthetical (but maybe you have nothing left to say so the text is superfluous :) haha wouldn’t it be crazy if I wrote some extra stuff here?).

There’s a secret sixth option of “don’t use emoticons inside parentheses” but, like, what if I really want to? (emoticons are important sometimes :) )

Notes

  1. I put a smiley face here for the sake of illustration even though I’m not happy. Feel free to interpret it as a deranged losing-my-sanity smile. 


I did Inkhaven

I published a post every day of November as part of the Inkhaven program, in which we were required to publish a post every day of November. Some of my readers knew that; others were confused about why I suddenly started posting so much.

If you’re an email subscriber, you didn’t see every post because I only sent out the good ones—I didn’t want to bombard you with emails if you were accustomed to my typical once-per-week-or-three-months posting schedule. If you want to see the bad posts, they’re all on https://mdickens.me/.

Inkhaven had 40 other residents; you can see their posts on the website, and daily highlights at the Inkhaven Spotlight.

Continue reading
