We return now to the study of URM-computable functions. Henceforth the term computable standing alone means URM-computable, and program means URM program.
The key fact established in this chapter is that the set of all programs is effectively denumerable: in other words, there is an effective coding of programs by the set of all natural numbers. Among other things, it follows that the class ℘ of computable functions is denumerable, which implies that there are many functions that are not computable. In § 3 we discuss Cantor's diagonal method, whereby this is established.
The numbering or coding of programs, and particularly its effectiveness, is absolutely fundamental to the development of the theory of computability. We cannot overemphasise its importance. From it we obtain codes or indices for computable functions, and this means that we are able to pursue the idea of effective operations involving such codes.
In § 4 we prove the first of two important theorems involving codes of functions: the so-called s–m–n theorem of Kleene. (The second theorem is the main result of chapter 5.)
Numbering programs
We first explain the terminology that we shall use.
Definitions
(a) A set X is denumerable if there is a bijection f: X → ℕ. (Note. The term countable is normally used to mean finite or denumerable; thus, for infinite sets, countable means the same as denumerable. The term countably infinite is used by some authors instead of denumerable.)
(b) An enumeration of a set X is a surjection g: ℕ → X; this is often represented by writing
X = {x₀, x₁, x₂, …}
where xₙ = g(n). This is an enumeration without repetitions if g is injective.
(c) Let X be a set of finite objects (for example a set of integers, or a set of instructions, or a set of programs); then X is effectively denumerable if there is a bijection f : X → ℕ such that both f and f⁻¹ are effectively computable functions.
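To illustrate (c) with a sketch of our own (not from the text): the set ℕ × ℕ of pairs of natural numbers is effectively denumerable via the standard bijection π(m, n) = 2ᵐ(2n + 1) − 1, since both π and π⁻¹ are evidently effectively computable. In Python (the names pair and unpair are ours):

def pair(m, n):
    # Code the pair (m, n) by the single number 2**m * (2*n + 1) - 1.
    return 2 ** m * (2 * n + 1) - 1

def unpair(z):
    # Recover (m, n) from its code: strip factors of 2, then read off n.
    z += 1
    m = 0
    while z % 2 == 0:
        z //= 2
        m += 1
    return m, (z - 1) // 2

assert all(unpair(pair(m, n)) == (m, n) for m in range(20) for n in range(20))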
In this chapter we shall see that various methods of combining computable functions give rise to other computable functions. This will enable us to show quite rapidly that many commonly occurring functions are computable, without writing a program each time – a task that would be rather laborious and tedious.
The basic functions
First we note that some particularly simple functions are computable; from these basic functions (defined in lemma 1.1 below) we shall then build more complicated computable functions using the techniques developed in subsequent sections.
Lemma
The following basic functions are computable:
(a) the zero function 0 (0(x) = 0 for all x);
(b) the successor function x + 1;
(c) for each n ≥ 1 and 1 ≤ i ≤ n, the projection function Uᵢⁿ given by Uᵢⁿ(x₁, …, xₙ) = xᵢ.
Proof. These functions correspond to the arithmetic instructions for the URM. Specifically, programs are as follows:
(a) 0: program Z(1);
(b) x + 1: program S(1);
(c) Uᵢⁿ: program T(i, 1).
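As a concrete (and entirely unofficial) illustration, the following Python sketch simulates a URM just far enough to run these one-instruction programs; the tuple encoding of instructions, the register dictionary, and the convention that the result is read from R1 are our own assumptions. The jump instruction J is included so that the sketch covers the full URM instruction set and can be reused below.

def run_urm(program, inputs):
    # Registers R1, R2, ... initially hold the inputs; all other registers hold 0.
    regs = {i + 1: v for i, v in enumerate(inputs)}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == 'Z':                      # Z(n): set Rn to 0
            regs[args[0]] = 0
        elif op == 'S':                    # S(n): add 1 to Rn
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == 'T':                    # T(m, n): copy Rm into Rn
            regs[args[1]] = regs.get(args[0], 0)
        elif op == 'J':                    # J(m, n, q): jump to instruction q if Rm = Rn
            if regs.get(args[0], 0) == regs.get(args[1], 0):
                pc = args[2] - 1
                continue
        pc += 1
    return regs.get(1, 0)                  # the computed value is read from R1

# The programs of the lemma.
zero = [('Z', 1)]                          # computes 0
succ = [('S', 1)]                          # computes x + 1
proj = lambda i: [('T', i, 1)]             # computes the i-th projection

assert run_urm(zero, [7]) == 0
assert run_urm(succ, [4]) == 5
assert run_urm(proj(2), [7, 9, 3]) == 9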
Joining programs together
In each of §§ 3–5 below we need to write programs that incorporate other programs as subprograms or subroutines. In this section we deal with some technical matters so as to make the program writing of later sections as straightforward as possible.
A simple example of program building is when we have programs P and Q, and we wish to write a program for the composite procedure: first do P, and then do Q. Our instinct is simply to write down the instructions in P followed by the instructions in Q. But there are two technical points to consider.
Suppose that P = I₁, I₂, …, Iₛ. A computation under P is completed when the ‘next instruction for the computation’ is Iᵥ for some v > s; we then require the computation under our composite program to proceed to the first instruction of Q.
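A minimal sketch of our own (reusing run_urm from the earlier sketch) of one way to meet this requirement: concatenate the two instruction lists and shift every jump target in Q by the length of P, assuming that any jump in P which leaves P points just past its final instruction, so that control falls through to the first instruction of Q.

def join(p, q):
    # Concatenate URM programs p and q; jump targets inside q are shifted by len(p)
    # so that they still refer to instructions of q in the combined program.
    s = len(p)
    shifted_q = [('J', i[1], i[2], i[3] + s) if i[0] == 'J' else i for i in q]
    return p + shifted_q

# Example: first compute x + 1 (program S(1)), then reset R1 to 0 (program Z(1)).
composite = join([('S', 1)], [('Z', 1)])
assert run_urm(composite, [3]) == 0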
The only prerequisite to be able to read this book is familiarity with the basic notations of sets and functions, and the basic ideas of mathematical reasoning. Here we shall review these matters, and explain the notation and terminology that we shall use. This is mostly standard; so for the reader who prefers to move straight to chapter 1 and refer back to this prologue only as necessary, we point out that we shall use the word function to mean a partial function in general. We discuss this more fully below.
Sets
Generally we shall use capital letters A, B, C,… to denote sets. We write x ∈ A to mean that x is a member of A, and we write x ∉ A to mean that x is not a member of A. The notation {x: … x…} where … x … is some statement involving x means the set of all objects x for which … x … is true. Thus {x : x is an even natural number} is the set {0,2,4,6,…}.
If A, B are sets, we write A ⊆ B to mean that A is contained in B (or A is a subset of B); we use the notation A ⊂ B to mean that A ⊆ B but A ≠ B (i.e. A is a proper subset of B). The union of the sets A, B is the set {x : x ∈ A or x ∈ B (or both)}, and is denoted by A ∪ B; the intersection of A, B is the set {x : x ∈ A and x ∈ B} and is denoted by A ∩ B. The difference (or relative complement) of the sets A, B is the set {x : x ∈ A and x ∉ B} and is denoted by A \ B.
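For readers who think in code, these operations correspond directly to Python's built-in set operations (a small illustration of our own, with hypothetical sets A and B):

A, B = {0, 2, 4}, {4, 5}
assert A | B == {0, 2, 4, 5}    # union A ∪ B
assert A & B == {4}             # intersection A ∩ B
assert A - B == {0, 2}          # difference A \ B
assert {0, 4} <= A              # {0, 4} ⊆ A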
In the real world of computing, the critical question about a function f is not Is f computable?, but rather Is f computable in practical terms? In other words, Is there a program for f that will compute f in the time (or space) we have available? The answer depends partly on our skill in writing programs and the sophistication of our computers; but intuitively we feel that there is an additional factor which can be described as the ‘intrinsic complexity’ of the function f itself. The theory of computational complexity, which we introduce in this chapter, has been developed in order to be able to discuss such questions and to aid the study of the more practical aspects of computability.
Using the URM approach, we can measure the time taken to compute each value of a function f by a particular program, on the assumption that each step of a URM computation is performed in unit time. The time of computation thus defined is an example of a computational complexity measure that reflects the complexity or efficiency of the program being used. (Later we shall mention other complexity measures.)
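As a small sketch of our own (mirroring run_urm from the earlier sketch, with a step counter added), the measure described here can be realised by counting executed instructions; the name run_urm_timed and the instruction encoding are our assumptions.

def run_urm_timed(program, inputs):
    # As run_urm above, but also count the number of steps executed:
    # this count is the time of computation of the program on the given inputs.
    regs = {i + 1: v for i, v in enumerate(inputs)}
    pc, steps = 0, 0
    while pc < len(program):
        steps += 1
        op, *args = program[pc]
        if op == 'Z':
            regs[args[0]] = 0
        elif op == 'S':
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == 'T':
            regs[args[1]] = regs.get(args[0], 0)
        elif op == 'J' and regs.get(args[0], 0) == regs.get(args[1], 0):
            pc = args[2] - 1
            continue
        pc += 1
    return regs.get(1, 0), steps

# The one-instruction program S(1) computes x + 1 in a single step;
# note that the measure is attached to the program, not to the function it computes.
assert run_urm_timed([('S', 1)], [4]) == (5, 1)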
With a notion of complexity of computation made precise, it is possible to pursue questions such as How intrinsically complex is a computable function f? and Is it possible to find a ‘best’ program for computing f?
The theory of computational complexity is a relatively new field of research; we shall present a small sample of results that have a bearing on the questions raised above. At the end of the chapter we shall provide suggestions for the reader wishing to pursue this topic further.
We begin in § 1 by defining some notation; after some discussion we proceed to show that there are arbitrarily complex computable functions. Section 2 is devoted to the surprising and curious Speed-up theorem of M. Blum, which shows in particular that there are computable functions having no ‘best’ program.
The emergence of the concept of a computable function over fifty years ago marked the birth of a new branch of mathematics: its importance may be judged from the fact that it has had applications and implications in fields as diverse as computer science, philosophy and the foundations of mathematics, as well as in many other areas of mathematics itself. This book is designed to be an introduction to the basic ideas and results of computability theory (or recursion theory, as it is traditionally known among mathematicians).
The initial purpose of computability theory is to make precise the intuitive idea of a computable function; that is, a function whose values can be calculated in some kind of automatic or effective way. Thereby we can gain a clearer understanding of this intuitive idea; and only thereby can we begin to explore in a mathematical way the concept of computability as well as the many related ideas such as decidability and effective enumerability. A rich theory then arises, having both positive and negative aspects (here we are thinking of non-computability and undecidability results), which it is the aim of this book to introduce.
We could describe computability theory, from the viewpoint of computer science, as beginning with the question What can computers do in principle (without restrictions of space, time or money)? – and, by implication – What are their inherent theoretical limitations? Thus this book is not about real computers and their hardware, nor is it about programming languages and techniques. Nevertheless, our subject matter is part of the theoretical background to the real world of computers and their use, and should be of interest to the computing community.
For the basic definition of computability we have used the ‘idealised computer’ or register machine approach; we have found that this is readily grasped by students, most of whom are aware of the idea of a computer.
Our basic study of computability has been designed so that it could serve as a stepping stone to more advanced or more detailed study in any of several directions. In this brief postlude, we shall mention some of the areas in which further study could be pursued, and we offer some suggestions for further reading. The divisions below are not hard and fast, and there are many interrelations between the various areas we mention.
Computability Further study of the theoretical notion of computability (the starting point of this book) could be pursued in two directions: (a) more detailed examination of other equivalent approaches to computability (which we surveyed in chapter 3); (b) examination of more restricted notions of effective computability, involving, for instance, finite automata and similar devices.
Some references (several historical) for (a) were given in chapter 3. For both (a) and (b) we suggest the books of Minsky [1967] (a very comprehensive treatment), Arbib [1969], or Engeler [1973].
Recursion theory We use this traditional title under which to mention more advanced ideas arising out of the notion of computability on ℕ, such as we began to pursue in chapters 7 and 9 to 11. Specific areas include:
Hierarchies: there are various ways to extend the sequence beginning ‘recursive, r.e., …’ to obtain a hierarchy of kinds of set, each kind of set having a more difficult decision problem than the preceding one. Among the important hierarchies that have been studied are the arithmetic hierarchy, the hyperarithmetic hierarchy, and the analytical hierarchy.
Reducibilities and degrees: between ≤m and ≤T there is a spectrum of reducibilities that could be investigated. For the student wishing to delve further into Turing reducibility, the next step would be to master a proof of the Friedberg-Muchnik solution to Post's problem, before proceeding to further results and proofs in this area, some of which we mentioned in chapter 9.