Proof of Rice's theorem

Recall the formal statement of Rice’s theorem:

We will use the notation \([n]\) for the \(n\)th Turing machine under some fixed numbering system. Each such machine induces a function, which we will also write as \([n]\) where this is unambiguous due to context; then it makes sense to write \([n](m)\) for the value that machine \([n]\) outputs when it is run on input \(m\).

Let \(A\) be a non-empty, proper %%note: That is, not the entire set.%% subset of \(\{ \mathrm{Graph}(n) : n \in \mathbb{N} \}\), where \(\mathrm{Graph}(n)\) is the graph of the partial function computed by \([n]\), the \(n\)th Turing machine. Then there is no Turing machine \([r]\) such that:

  • \([r](i)\) is \(1\) if \(\mathrm{Graph}(i) \in A\)

  • \([r](i)\) is \(0\) if \(\mathrm{Graph}(i) \not \in A\).

We give a proof that is (very nearly) constructive: one which (if we could be bothered to work it all through) gives us an explicit example %%note: Well, very nearly; see the next note.%% of a Turing machine whose “am-I-in-\(A\)” nature cannot be determined by a Turing machine. %%note: It’s only “very nearly” constructive. It would be actually constructive if we knew in advance a specific example of a program whose function is in \(A\), and a program whose function is not in \(A\). The proof here assumes the existence of a program of each type, but ironically the theorem itself guarantees that there is no fully general way to find such programs.%%

We will present an intermediate lemma which does all the heavy lifting; this makes the actual proof very succinct, at the cost of obscuring the reasoning, so we will also include an extensive worked example of what this lemma does for us.

Fixed point theorem

The intermediate lemma is a certain fixed-point theorem.

Let \(h: \mathbb{N} \to \mathbb{N}\) be total computable: that is, it halts on every input. Then there is \(n \in \mathbb{N}\) such that \(\mathrm{Graph}(n) = \mathrm{Graph}(h(n))\). %%note: And, moreover, we can actually find such an \(n\).%%

That is, the “underlying function” of \(n\) (the partial function computed by \([n]\)) has the same output, at every point, as the function computed by \([h(n)]\). If we view \(h\) as a way of manipulating a program (as specified by its description number), then this fixed-point theorem states that we can find a program whose underlying function is not changed at all by \(h\).

The proof of this lemma is quite simple once the magic steps have been discovered, but it is devilishly difficult to intuit, because it involves two rather strange and confusing recursions and some self-reference.

Recall the \(s_{mn}\) theorem, which states that there is a total computable function \(S\) of two variables \(m, n\) such that for every \(e \in \mathbb{N}\), we have \([e](m, n) = [S(e,m)](n)\): that is, there is a total computable way \(S\) of currying computable functions. (Strictly speaking, our Turing machines only take one argument. Therefore we should use a computable pairing scheme such as Cantor’s pairing function, so that actually \([e](m,n)\) should be interpreted as \([e](\mathrm{pair}(m, n))\).)
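As an illustration (not part of the proof), here is a minimal Python sketch of these two ingredients: Cantor's pairing function, and an \(S\) that curries a program by hard-coding its first argument. The convention that a two-argument program reads its arguments from variables m and n is ours, purely for this sketch.

```python
from math import isqrt

def pair(m: int, n: int) -> int:
    """Cantor's pairing function: a computable bijection from pairs of
    naturals to naturals."""
    return (m + n) * (m + n + 1) // 2 + n

def unpair(p: int) -> tuple:
    """Inverse of pair."""
    w = (isqrt(8 * p + 1) - 1) // 2  # largest w with w*(w+1)/2 <= p
    n = p - w * (w + 1) // 2
    return (w - n, n)

def S(code: str, m: int) -> str:
    """Specialise a two-argument program (arguments in variables m and n)
    to a one-argument program of n alone, by hard-coding m."""
    return f"m = {m}\n" + code
```

For example, running `S("result = m + n", 3)` in an environment where n is \(4\) sets result to \(7\): currying really is just prepending an initialisation line.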

Then the function which takes the pair \((e, x)\) and outputs the value of \([ h(S(e,e)) ](x)\) is computable, so it has a description number \(a\), say. %%note: This is the first strange part: we are treating \(e\) both as a description number, and as an input to \([e]\), when we consider \(S(e,e)\).%%

Now we claim that \(S(a, a)\) is the \(n\) we seek. %%note: This is the second strange part, for the same reason as \(S(e,e)\) was the first; but this one is even worse, because the definition of \(a\) already involves a weird recursion and we’ve just added another one on top.%% Indeed, for any \(x\), we have \([n](x) = [S(a,a)](x)\) by definition of \(n\); this is \([a](a, x)\) by the \(s_{mn}\) theorem; this is \([h(S(a,a))](x)\) by definition of \([a]\); and that is \([h(n)](x)\) by definition of \(n\).
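Displayed as one chain of equalities, valid for every \(x\):

```latex
\begin{aligned}
[n](x) &= [S(a,a)](x) && \text{by definition of } n \\
       &= [a](a, x) && \text{by the } s_{mn} \text{ theorem} \\
       &= [h(S(a,a))](x) && \text{by definition of } [a] \\
       &= [h(n)](x) && \text{by definition of } n.
\end{aligned}
```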

Therefore \([n](x) = [h(n)](x)\), so we have found our fixed point.

Worked example %%note: I’m not clever enough to check this properly, but someone should.%%

Suppose our description numbering scheme is just “expand \(n\) as a number in base \(128\), and interpret the result as an ASCII %%note: ASCII is a standard, agreed-upon method of turning a number between \(0\) and \(127\) into a character.%% string; then interpret that string as [Python](https://wiki/Python_(programming_language)) code”.

Then our function \(h\), whatever it may be, can be viewed just as transforming Python code.

Suppose \(h\) does nothing more than insert the following line of code as the second line of its input:

x = 0

So, for instance, it takes the string

x = 1

and returns

x = 1
x = 0

thereby changing the function computed from “return the constant \(1\)” to “return the constant \(0\)”, in this case. Note that the functions computed by many other programs will not change at all: for example, any program which never mentions a variable x is functionally unchanged, because all the modification does is add an initialisation of a variable which is never subsequently used.
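For concreteness, here is one way this particular \(h\) might be written in Python; the treatment of “line” as newline-separated is our own choice:

```python
def h(code: str) -> str:
    """Insert `x = 0` as the second line of the given Python program."""
    lines = code.split("\n")
    return "\n".join(lines[:1] + ["x = 0"] + lines[1:])
```

On the one-line program x = 1 this returns the two-line program from the example above; on a program with no variable x it changes the text but not the computed function.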

The fixed-point theorem guarantees that there is indeed a Python program which will not change at all under this modification (though in this case it’s very obvious). In fact the theorem constructs such a program; can we work out what it is?

First of all, \(S(m, n)\) can be implemented as follows. We will take our Python programs to be written so that their input is given in the variable r1; so \([e](5)\) is simply the Python code represented by \(e\) but where the code-variable r1 is initialised to \(5\) first: that is, it can be found by prepending the line r1 = 5 to the code represented by \(e\). Then we will assume that Python comes with a function eval (corresponding to \(S\)) which takes as its input a string %%note: The string is standing in place of \(m\), but we have just skipped the intermediate step of “unpack the integer into a string” and gone straight to assuming it is a string.%% and another argument with which eval initialises the variable r1 before running the string as a Python program in a separate instance of Python:

eval("print(r1)", 5)  # does nothing other than print the number 5
eval("print(y)", 5)  # throws an error because `y` is not defined when it comes to printing it
eval("print(6)", 5)  # prints 6, ignoring the fact that the variable `r1` is equal to `5` in the sub-instance
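This eval is an assumption about our toy Python, not the builtin of the same name. A minimal sketch of it, using exec to run the string in a fresh namespace with r1 pre-initialised (and returning the namespace so we can inspect the result), might look like:

```python
def eval_(code: str, arg):
    """Run `code` as a separate Python program whose input variable r1
    is initialised to `arg` first; return the resulting namespace."""
    namespace = {"r1": arg}
    exec(code, namespace)
    return namespace
```

So eval_("y = r1 + 1", 5) produces a namespace in which y is \(6\), and eval_("print(6)", 5) prints 6 while ignoring r1, just as in the examples above.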

Remember, our proof of the fixed point theorem says that the program we want has code \(S(a, a)\), where \(a\) takes a pair \((e, x)\) as input, and outputs \([h(S(e,e))](x)\). What is \(a\) specifically here? Well, on the one hand we’re viewing it as a string of code (because it comes as the first argument to \(S\)), and on the other we’re viewing it as an integer (because it also comes as the second argument to \(S\)).

As code, a is the following string, where h is to be replaced by whatever we’ve already decided \(h\) is:

eval(h("r1 = e; eval(r1, str_as_int(r1))"), x)

We are assuming the existence of a function str_as_int which takes an ASCII string and returns the integer whose places in base 128 are the ASCII for each character of the string in turn.
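A sketch of str_as_int under one concrete convention (most significant character first), together with an inverse int_as_str thrown in for convenience; the direction of the digit order is our choice:

```python
def str_as_int(s: str) -> int:
    """Read an ASCII string as a number written in base 128,
    most significant character first."""
    n = 0
    for ch in s:
        n = n * 128 + ord(ch)
    return n

def int_as_str(n: int) -> str:
    """Inverse of str_as_int (for n whose leading base-128 digit
    is nonzero)."""
    chars = []
    while n > 0:
        n, d = divmod(n, 128)
        chars.append(chr(d))
    return "".join(reversed(chars))
```

For instance, the three-character program x=1 corresponds to the number \(120 \cdot 128^2 + 61 \cdot 128 + 49 = 1973937\).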

For example, we have \(h\) inserting the line x = 0 as the second line, so our a is:

eval("r1 = e; x = 0; eval(r1, str_as_int(r1))", x)

As a number, a is just the ASCII for this, interpreted in base 128: a certain number of around a hundred decimal digits, which is why we don’t give it here.

The claim of the fixed-point theorem, then, is that the following program is unchanged by \(h\):

eval("eval(\"r1 = e; x = 0; eval(r1, str_as_int(r1))\", x)", str_as_int("eval(\"r1 = e; x = 0; eval(r1, str_as_int(r1))\", x)"))

You may recognise this as a quining construction.

Deducing Rice’s theorem from the fixed point theorem

Finally, Rice’s theorem follows quickly: suppose we could decide in general whether \(\mathrm{Graph}(n) \in A\) or not, and label by \(\iota\) the computable function which decides this (that is, whose value is \(1\) if \(\mathrm{Graph}(n) \in A\), and \(0\) otherwise).

Since \(A\) is nonempty and proper, there are natural numbers \(a\) and \(b\) such that \(\mathrm{Graph}(a) \in A\) but \(\mathrm{Graph}(b) \not \in A\). Define the computable function \(g\) which takes \(n\) and outputs \(a\) if \(\iota(n) = 0\), and \(b\) otherwise. (That is, it flips its input: if its input had the property of \(A\), the function \(g\) outputs \(b\) whose graph is not in \(A\), and vice versa. Informally, it is the program-transformer that reads in a program, determines whether the program computes a function in \(A\) or not, and transforms the program into a specific canonical example of something which has the opposite \(A\)-ness status.)
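Sketched in Python, with iota, a and b as the hypothetical ingredients just described (no such iota actually exists; that is the point of the theorem):

```python
def make_g(iota, a: int, b: int):
    """Given a purported decider iota (returning 1 if Graph(n) is in A,
    else 0) and description numbers a, b with Graph(a) in A and
    Graph(b) not in A, build the flipping function g."""
    def g(n: int) -> int:
        # Output a canonical example of the opposite A-ness status.
        return a if iota(n) == 0 else b
    return g
```

Whatever iota is, g(n) always has the opposite \(A\)-status from n, which is exactly what makes the fixed-point theorem bite.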

By the fixed-point theorem, we can find \(n\) such that \(\mathrm{Graph}(n) = \mathrm{Graph}(g(n))\).

But now we can ask whether \(\mathrm{Graph}(n)\) is in \(A\) (and therefore whether \(\mathrm{Graph}(g(n))\) is in \(A\)).

  • If it is in \(A\), then \(g(n) = b\), and so \(\mathrm{Graph}(n) = \mathrm{Graph}(g(n)) = \mathrm{Graph}(b)\), which is not in \(A\).

  • If it is not in \(A\), then \(g(n) = a\), and so \(\mathrm{Graph}(n) = \mathrm{Graph}(g(n)) = \mathrm{Graph}(a)\), which is in \(A\).

We have obtained contradictions in both cases (namely that \(\mathrm{Graph}(g(n))\) is both in \(A\) and not in \(A\)), so it must be the case that \(\iota\) does not exist after all.

