There are several reasons for studying approximation theory and methods, ranging from a need to represent functions in computer calculations to an interest in the mathematics of the subject. Although approximation algorithms are used throughout the sciences and in many industrial and commercial fields, some of the theory has become highly specialized and abstract. Work in numerical analysis and in mathematical software is one of the main links between these two extremes, for its purpose is to provide computer users with efficient programs for general approximation calculations, in order that useful advances in the subject can be applied. This book presents the view of a numerical analyst, who enjoys the theory, and who is keenly interested in its importance to practical computer calculations. It is based on a course of twenty-four lectures, given to third-year mathematics undergraduates at the University of Cambridge. There is really far too much material for such a course, but it is possible to speak coherently on each chapter for about one hour, and to include proofs of most of the main theorems. The prerequisites are an introduction to linear spaces and operators and an intermediate course on analysis, but complex variable theory is not required.
Spline functions have transformed approximation techniques and theory during the last fifteen years. Not only are they convenient and suitable for computer calculations, but also they provide optimal theoretical solutions to the estimation of functions from limited data. Therefore seven chapters are given to spline approximations. The classical theory of best approximations from linear spaces with respect to the minimax, least squares and L1-norms is also studied, and algorithms are described and analysed for the calculation of these approximations. Interpolation is considered also, and the accuracy of interpolation and other linear operators is related to the accuracy of optimal algorithms. Special attention is given to polynomial functions, and there is one chapter on rational functions, but, due to the constraints of twenty-four lectures, the approximation of functions of several variables is not included.
Many excellent books are published on approximation theory and methods. The general texts that are particularly valuable to the present work are the ones by Achieser [2], Cheney [35], Davis [50], Handscomb (ed.) [74], Hayes (ed.) [77], Hildebrand [78], Holland & Sahney [81], Lorentz [100], Rice [132] and [134], Rivlin [138] and Watson [161]. Detailed references and suggestions for further reading are given in this appendix.
Most of the theory in Chapter 1 is taken from Cheney [35] and from Rice [132]. If one prefers an introduction to approximation theory that shows the relations to functional analysis, then the paper by Buck [32] is recommended. We give further attention only in special cases to the interesting problem, mentioned at the end of Section 1.1, of investigating how well any member of B can be approximated from A; a more general study of this problem is in Lorentz [100] and in Vitushkin [160]. The development of the Polya algorithm, which is the subject of Exercise 1.10, into a useful computational procedure is considered by Fletcher, Grant & Hebden [57].
In Chapter 2, as in Chapter 1, much of the basic theory is taken from Cheney [35]. For a further study of convexity the book by Rockafellar [142] is recommended. Several excellent examples of the non-uniqueness of best approximation with respect to the 1- and the ∞-norms are given by Watson [161]. An interesting case of Exercise 2.1, namely when B is the space Rn and the unit ball {f: ∥f∥ ≤ 1; f ∈ Rn} is a polyhedron, is considered by Anderson & Osborne [5].
The point of view in Chapter 3 that approximation algorithms can be regarded as operators is treated well by Cheney [35], and more advanced work on this subject can be found in Cheney & Price [37]. Several references to applications of Theorem 3.1 are given later, including properties of polynomial approximation operators that are defined by interpolation conditions.
Over the past fifty years there have been many proposals for a precise mathematical characterisation of the intuitive idea of effective computability. The URM approach is one of the more recent of these. In this chapter we pause in our investigation of URM-computability itself to consider two related questions.
How do the many different approaches to the characterisation of computability compare with each other, and in particular with URM-computability?
How well do these approaches (particularly the URM approach) characterise the informal idea of effective computability?
The first question will be discussed in §§ 1-6; the second will be taken up in § 7. The reader interested only in the technical development of the theory in this book may omit §§ 3-6; none of the development in later chapters depends on these sections.
Other approaches to computability
The following are some of the alternative characterisations that have been proposed:
(a) Gödel-Herbrand-Kleene (1936). General recursive functions defined by means of an equation calculus. (Kleene [1952], Mendelson [1964].)
(b) Church (1936). λ-definable functions. (Church [1936] or [1941].)
(c) Gödel-Kleene (1936). μ-recursive functions and partial recursive functions. (§ 2 of this chapter.)
(d) Turing (1936). Functions computable by finite machines known as Turing machines. (Turing [1936]; § 4 of this chapter.)
(e) Post (1943). Functions defined from canonical deduction systems. (Post [1943], Minsky [1967]; § 5 of this chapter.)
(f) Markov (1951). Functions given by certain algorithms over a finite alphabet. (Markov [1954], Mendelson [1964]; § 5 of this chapter.)
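To give a concrete flavour of approach (c), the following sketch implements the unbounded minimisation (μ-) operator in Python. The names `mu` and `isqrt` are illustrative choices, not notation from the text; the point is that minimisation may search forever, which is exactly how partial, as opposed to total, recursive functions arise.

```python
def mu(p):
    """Unbounded minimisation: return the least y >= 0 with p(y) == 0.

    If no such y exists, the loop never terminates -- the function is
    undefined at that argument, i.e. it is partial.
    """
    y = 0
    while p(y) != 0:
        y += 1
    return y

# Example: the integer square root of n, defined by minimisation as
# the least y such that (y + 1)^2 > n.
def isqrt(n):
    return mu(lambda y: 0 if (y + 1) ** 2 > n else 1)
```

For instance, `isqrt(10)` searches y = 0, 1, 2, ... and stops at y = 3, the least y with (y + 1)² > 10.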
There is great diversity among these various approaches; each has its own rationale for being considered a plausible characterisation of computability. The remarkable result of investigation by many researchers is the following:
The Fundamental result
Each of the above proposals for a characterisation of the notion of effective computability gives rise to the same class of functions, the class that we have denoted 𝒞.
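As a small illustration of approach (d), here is a sketch of a single-tape Turing machine simulator, together with a machine computing the successor function on unary numerals. The encoding conventions (`run_tm`, the symbol `'B'` for blank, n represented by n strokes) are illustrative assumptions, not the precise formalism of the text; the point is that the same function computed by minimisation or by an equation calculus is also computed by a finite machine.

```python
def run_tm(program, tape, state=0, pos=0, blank='B'):
    """Simulate a single-tape Turing machine.

    program maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right).  The machine halts when
    no instruction matches the current configuration.
    """
    cells = dict(enumerate(tape))
    while (state, cells.get(pos, blank)) in program:
        sym, move, state = program[(state, cells.get(pos, blank))]
        cells[pos] = sym
        pos += move
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# Successor on unary numerals: scan right past the strokes,
# then write one more stroke and halt.
succ = {
    (0, '1'): ('1', +1, 0),   # skip existing strokes
    (0, 'B'): ('1', +1, 1),   # append a stroke; state 1 has no rules, so halt
}

run_tm(succ, '111')  # '1111', i.e. 3 + 1 = 4 in unary
```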