Content note: Proofs involving computation and Turing machines. Whether you understand the halting problem is probably a good predictor of whether this post will make sense to you.

I use the terms “program” and “Turing machine” interchangeably.

Update 2016-03-22: I had previously written this post about “consciousness verifiers”. In computability theory, “verifier” has a specific meaning that’s different from how I was using it. I have replaced “verifier” with “decider”.

What is consciousness?

Consciousness is probably an execution on a Turing machine. More specifically, it’s probably a series of steps executed by a Turing machine. I explain my reasoning for this claim in “Observations on Consciousness”; here I will let it stand without justification.

The Hard Problem and consciousness checkers

The hard problem of consciousness asks, why are things conscious at all? What makes them conscious? If consciousness is a series of steps executed on a Turing machine, then solving the hard problem of consciousness is equivalent to writing a program that, given a series of steps on a Turing machine, can decide whether those steps are conscious.

We can write a different sort of consciousness decider (CD) that determines whether a given Turing machine will ever produce conscious computations. And we can say that a program is conscious if it ever performs conscious computations. I will use this definition of a consciousness decider for my discussion here.

Now, this type of CD probably cannot exist in the general case because that would require solving the halting problem.[1] But we can write a CD that works on programs for which it can determine whether they halt. This probably covers almost all cases we care about: a smart human can solve the halting problem over almost all short programs written by humans, and a smarter superhuman with much more RAM could probably solve the halting problem over pretty much all programs that any human has ever written. Therefore it’s reasonable to expect that there could exist a consciousness decider that works on almost all beings we could care about.
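
To pin down the contract this restricted CD is supposed to satisfy, here is a minimal sketch in Python. It is purely illustrative: the helper functions halting_is_settled and performs_conscious_computation are placeholders I have invented for abilities we do not actually have, not anything this post claims to know how to build.

# Hypothetical sketch of the restricted consciousness decider's contract.
# Both helpers are stand-ins for abilities we do not actually have.

def halting_is_settled(program) -> bool:
    # Placeholder for the "smart human / superhuman with more RAM" judgment
    # that settles whether the given program halts.
    return True

def performs_conscious_computation(program) -> bool:
    # Placeholder: implementing this for real is the whole problem.
    return False

def consciousness_decider(program) -> bool:
    """Only meaningful on the restricted set S of programs whose halting
    behaviour we can determine."""
    if not halting_is_settled(program):
        raise ValueError("program lies outside the restricted set S")
    return performs_conscious_computation(program)

The rest of the post is about what must be true of such a decider as a whole, not about how to implement either placeholder.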

Proof of consciousness

Theorem. A consciousness decider must be conscious.

Proof. Consider a program CD such that given any program X (subject to restrictions), CD(X) returns true if and only if X is conscious.

Recall from a previous section that “X is conscious” means that, at some point during its execution, X performs a series of steps that produce consciousness.

The set S of possible programs X is subject to restrictions such that it is possible to determine whether X halts for any X in S.

Now define a program A as follows:

program A {
    if CD(A) then halt()            // CD says A is conscious: halt without doing anything conscious
    else doSomethingConscious()     // CD says A is not conscious: do something conscious anyway
}

Suppose CD(A) returns false. Then A does something conscious, which means CD incorrectly judged A not to be conscious. But CD is defined to be a perfect consciousness decider, so this is a contradiction. Either no such program CD exists, or, if it does exist, it cannot return false on A. So CD(A) must return true, and since CD is correct by assumption, A is conscious.

All A does is call CD(A), check its result, and halt. If A is conscious but the computation CD(A) is not, that means a non-conscious computation becomes conscious simply by having an if statement wrapped around it. We don’t know what consciousness is, but it probably isn’t the sort of thing you can turn on by wrapping an if statement around a computation. Therefore CD(A) is itself probably conscious.

Note: This does not mean that CD is conscious on every possible input. All the proof shows is that there exists at least one input on which CD performs conscious computations.

Does this prove too much?

I see no explicit flaws in this proof, but I may be able to counter it by showing that it proves too much. The proof shows that a consciousness decider must be conscious; but beyond that, it shows that a decider for any property Y must itself have property Y, provided that wrapping an if statement around a computation does not change whether the computation has Y.

Let’s consider an example of such a property and see if the proof makes sense. Take the property Y where Y(X) returns true if and only if X branches at least N times (for some arbitrarily large number N). Does the proof still hold?

Here, wrapping an if statement around program X cannot change Y(X) as long as N is sufficiently large and X is sufficiently small: a single extra branch can only flip the answer if X branches exactly N - 1 times.

Let’s rewrite our program A as A’:

program A' {
    if Y(A') then halt()        // Y says A' branches at least N times: stop
    else branchNTimes()         // otherwise, branch at least N times anyway
}

To avoid a paradox, Y(A’) must return true. And in fact, Y(A’) does return true. Y counts the number of times that A’ branches, and for every time A’ branches (at least until it has branched N times), Y must branch at least once (because it behaves differently depending on whether A’ branches). But A’ calls Y, so A’ branches every time Y branches. Therefore Y continues branching until it has counted N branches. In particular, the computation Y(A’) itself performs at least N branches, so the decider has the very property it is deciding, just as the proof predicts.
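
To make the branch-counting decider concrete, here is a minimal runnable sketch in Python. It is my own illustration, not part of the argument: in particular, the convention that a program announces each branch by calling branch(), and all of the names below, are assumptions.

# Toy branch-counting property checker. A program "announces" each branch by
# calling branch(); Y runs the program and reports whether it branched N times.

N = 1000
branch_count = 0

def branch():
    # The test below is itself a branch that Y's bookkeeping takes for every
    # branch it counts, mirroring the argument above.
    global branch_count
    if branch_count < N:
        branch_count += 1

def Y(program) -> bool:
    """Return True iff running the program takes at least N announced branches."""
    global branch_count
    branch_count = 0
    program()
    return branch_count >= N

def toy_program():
    for _ in range(N):
        branch()

print(Y(toy_program))  # True

Of course, this sketch cannot be fed a self-referential program like A’ without recursing forever; it only pins down what the property checker itself looks like.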

This proof holds for the property that the input program branches at least N times. Therefore, by (very) weak induction, it holds for any property fitting the requirements of the proof.

Is this result trivial?

It’s possible that for CD to determine whether a program X is conscious, it must execute X until either X does something conscious or X halts. If so, then whenever X does something conscious, CD(X) simulates X doing something conscious and therefore itself does something conscious. In that case, all the proof really says is “when a consciousness decider runs a perfect simulation of a conscious computation, that simulation must be conscious”, which is trivial.
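
Here is a rough sketch, again in Python and again purely illustrative, of that “execute X and watch” strategy. The function looks_conscious is a placeholder for the test we do not know how to write; having it is exactly what solving the hard problem would mean.

# Sketch of the "simulate until it does something conscious or halts" strategy.

def looks_conscious(state) -> bool:
    return False  # placeholder: we have no idea how to implement this

def cd_by_simulation(steps) -> bool:
    """Simulate a program given as a finite list of step functions; return True
    as soon as some intermediate state is judged conscious."""
    state = None
    for step in steps:
        state = step(state)
        if looks_conscious(state):
            # By the time we can return True here, the decider has itself just
            # carried out the conscious computation it was checking for.
            return True
    return False

print(cd_by_simulation([lambda s: 0, lambda s: 1]))  # False (with the placeholder)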

But if the only way to verify consciousness of a program is to run the program, that effectively means there is no way to verify consciousness from the outside. In other words, the only way for me to know if you’re conscious is for me to be you (by running a perfect simulation of you inside my head). Perhaps there are special cases where a program can be determined conscious or non-conscious just by analyzing the source code, but a special-case consciousness decider is not necessarily conscious because the proof given above doesn’t work on it.

This proof means that any general-purpose consciousness decider must be conscious. Either a consciousness decider can only work by running a complete simulation of a program, which is non-obvious and interesting; or even if it doesn’t run a complete simulation, it must still be conscious, which is also non-obvious and interesting. I suspect that the former is true.

Independent consciousnesses

Suppose that a consciousness decider must run a complete simulation of a program to check if it’s conscious. It’s actually plausible that the simulation is conscious at one level and the CD is independently conscious at a second level.

To justify this, consider your intuitions about consciousness. Suppose you are given instructions for a Turing machine and a bunch of paper, and you have to execute the Turing machine to determine if it’s conscious. You go through the execution of the Turing machine, writing down a bunch of ones and zeros, and find that you are producing conscious computations. That means there’s something about what you’re doing that is producing consciousness. But the consciousness does not come from your brain. You experience your own consciousness, but you do not experience what the Turing machine on the paper is experiencing. If it experiences something, it’s at a different level from your own experience.

It’s plausible that a general consciousness decider would work the same way. It must simulate a conscious program to determine that it’s conscious, but the simulation is conscious independently of the CD.

This relies on the assumption that if you were to simulate a conscious Turing machine, you would not experience its consciousness. This is based on an intuition. I feel this intuition fairly strongly. But I also have a fairly strong intuition that ones and zeros written on a piece of paper cannot be conscious, and one of these two intuitions is almost certainly wrong. So it’s conceivable that if you did simulate a conscious Turing machine, you would experience its consciousness.

Conclusion

Nothing is resolved and everything is still confusing.

Notes

  1. It is not actually proven that such a program cannot exist. It’s possible that for every program A where a CD cannot determine whether it halts, A is conscious. In that case, a fully general CD could exist: whenever it fails to determine whether a program halts, it can simply report that the program is conscious. This is actually not that implausible: programs whose halting we cannot determine and conscious programs are both sufficiently complex that we don’t understand them well, and they may overlap in a non-obvious way.