In this chapter, we turn to the study of statistics, which is concerned with the analysis of experimental data. In a book of this nature we cannot hope to do justice to such a large subject; indeed, many would argue that statistics belongs to the realm of experimental science rather than to that of a mathematics textbook. Nevertheless, physical scientists and engineers are regularly called upon to perform a statistical analysis of their data and to present their results in a statistical context. Therefore, we will concentrate on this aspect of a much more extensive subject.
Experiments, samples and populations
We may regard the product of any experiment as a set of N measurements of some quantity x or set of quantities x, y, …, z. This set of measurements constitutes the data. Each measurement (or data item) consists accordingly of a single number xi or a set of numbers (xi, yi, …, zi), where i = 1, …, N. For the moment, we will assume that each data item is a single number, although our discussion can be extended to the more general case.
As a result of inaccuracies in the measurement process, or because of intrinsic variability in the quantity x being measured, one would expect the N measured values x1, x2, …, xN to be different each time the experiment is performed.
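As a concrete illustration of such a data set, the following sketch computes the sample mean and the sample standard deviation of N repeated measurements; the measurement values are invented for illustration, and the divisor N (rather than N − 1) is one of the conventions discussed later in the chapter.

```python
import math

def sample_mean(data):
    """Sample mean of the N measurements x_1, ..., x_N."""
    return sum(data) / len(data)

def sample_std(data):
    """Sample standard deviation (divisor N), measuring the spread
    of the measurements about their mean."""
    xbar = sample_mean(data)
    return math.sqrt(sum((x - xbar) ** 2 for x in data) / len(data))

# Five repeated measurements of the same quantity x (illustrative values)
measurements = [9.8, 10.1, 10.0, 9.9, 10.2]
print(sample_mean(measurements))  # 10.0
print(sample_std(measurements))
```

The scatter of the five values about their mean is precisely the variability, arising from measurement inaccuracy or from intrinsic randomness in x, that the chapter sets out to quantify.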
In this chapter and the next the solution of differential equations of types typically encountered in the physical sciences and engineering is extended to situations involving more than one independent variable. A partial differential equation (PDE) is an equation relating an unknown function (the dependent variable) of two or more variables to its partial derivatives with respect to those variables. The most commonly occurring independent variables are those describing position and time, and so we will couch our discussion and examples in notation appropriate to them.
As in other chapters we will focus our attention on the equations that arise most often in physical situations. We will restrict our discussion, therefore, to linear PDEs, i.e. those of first degree in the dependent variable and its derivatives. Furthermore, we will discuss primarily second-order equations. The solution of first-order PDEs will necessarily be involved in treating these, and some of the methods discussed can be extended without difficulty to third- and higher-order equations. We shall also see that many ideas developed for ordinary differential equations (ODEs) can be carried over directly into the study of PDEs.
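For orientation, three linear second-order PDEs dominate applications in the physical sciences; written in one spatial dimension (with c a wave speed and κ a diffusion constant), they are

```latex
\frac{\partial^2 u}{\partial x^2} = \frac{1}{c^2}\,\frac{\partial^2 u}{\partial t^2}
\quad\text{(wave equation)},\qquad
\kappa\,\frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}
\quad\text{(diffusion equation)},\qquad
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0
\quad\text{(Laplace's equation)}.
```

Each is of first degree in the dependent variable u and its partial derivatives, and so is linear in the sense just described.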
In this chapter we will concentrate on general solutions of PDEs in terms of arbitrary functions and the particular solutions that may be derived from them in the presence of boundary conditions. We also discuss the existence and uniqueness of the solutions to PDEs under given boundary conditions.
In the next chapter the methods most commonly used in practice for obtaining solutions to PDEs subject to given boundary conditions will be considered.
A knowledge of mathematical methods is important for an increasing number of university and college courses, particularly in physics, engineering and chemistry, but also in more general science. Students embarking on such courses come from diverse mathematical backgrounds, and their core knowledge varies considerably. We have therefore decided to write a textbook that assumes knowledge only of material that can be expected to be familiar to all the current generation of students starting physical science courses at university. In the United Kingdom this corresponds to the standard of Mathematics A-level, whereas in the United States the material assumed is that which would normally be covered at junior college.
Starting from this level, the first six chapters cover a collection of topics with which the reader may already be familiar, but which are here extended and applied to typical problems encountered by first-year university students. They are aimed at providing a common base of general techniques used in the development of the remaining chapters. Students who have had additional preparation, such as Further Mathematics at A-level, will find much of this material straightforward.
Following these opening chapters, the remainder of the book is intended to cover at least that mathematical material which an undergraduate in the physical sciences might encounter up to the end of his or her course. The book is also appropriate for those beginning graduate study with a mathematical content, and naturally much of the material forms parts of courses for mathematics students.
Throughout this book references have been made to results derived from the theory of complex variables. This theory thus becomes an integral part of the mathematics appropriate to physical applications. Indeed, so numerous and widespread are these applications that the whole of the next chapter is devoted to a systematic presentation of some of the more important ones. The current chapter develops the general theory on which these applications are based. The difficulty with it, from the point of view of a book such as the present one, is that the underlying basis has a distinctly pure mathematics flavour.
Thus, to adopt a comprehensive rigorous approach would involve a large amount of groundwork in analysis, for example formulating precise definitions of continuity and differentiability, developing the theory of sets and making a detailed study of boundedness. Instead, we will be selective and pursue only those parts of the formal theory that are needed to establish the results used in the next chapter and elsewhere in this book.
In this spirit, the proofs that have been adopted for some of the standard results of complex variable theory have been chosen with an eye to simplicity rather than sophistication. This means that in some cases the imposed conditions are more stringent than would be strictly necessary if more sophisticated proofs were used; where this happens the less restrictive results are usually stated as well. The reader who is interested in a fuller treatment should consult one of the many excellent textbooks on this fascinating subject.
This chapter introduces space vectors and their manipulation. Firstly we deal with the description and algebra of vectors, then we consider how vectors may be used to describe lines and planes and finally we look at the practical use of vectors in finding distances. Much use of vectors will be made in subsequent chapters; this chapter gives only some basic rules.
Scalars and vectors
The simplest kind of physical quantity is one that can be completely specified by its magnitude, a single number, together with the units in which it is measured. Such a quantity is called a scalar and examples include temperature, time and density.
A vector is a quantity that requires both a magnitude (≥ 0) and a direction in space to specify it completely; we may think of it as an arrow in space. A familiar example is force, which has a magnitude (strength) measured in newtons and a direction of application. The large number of vectors that are used to describe the physical world include velocity, displacement, momentum and electric field. Vectors are also used to describe quantities such as angular momentum and surface elements (a surface element has an area and a direction defined by the normal to its tangent plane); in such cases their definitions may seem somewhat arbitrary (though in fact they are standard) and not as physically intuitive as for vectors such as force.
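Anticipating the component description developed later in the chapter, the following sketch shows how a vector's magnitude and the sum of two vectors are obtained from Cartesian components; the force value used is illustrative.

```python
import math

def magnitude(v):
    """Magnitude |v| of a vector given by its Cartesian components."""
    return math.sqrt(sum(c * c for c in v))

def add(a, b):
    """Component-wise sum a + b of two vectors of equal dimension."""
    return tuple(x + y for x, y in zip(a, b))

# A force with components (3, 4, 0) in newtons: magnitude 5 N,
# directed in the x-y plane
force = (3.0, 4.0, 0.0)
print(magnitude(force))  # 5.0
```

The magnitude is the single non-negative number referred to in the text; the tuple of components encodes the direction.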
As indicated at the start of the previous chapter, significant conclusions can often be drawn about a physical system simply from the study of its symmetry properties. That chapter was devoted to setting up a formal mathematical basis, group theory, with which to describe and classify such properties; the current chapter shows how to implement the consequences of the resulting classifications and obtain concrete physical conclusions about the system under study. The connection between the two chapters is akin to that between working with coordinate-free vectors, each denoted by a single symbol, and working with a coordinate system in which the same vectors are expressed in terms of components.
The ‘coordinate systems’ that we will choose will be ones that are expressed in terms of matrices; it will be clear that ordinary numbers would not be sufficient, as they make no provision for any non-commutation amongst the elements of a group. Thus, in this chapter the group elements will be represented by matrices that have the same commutation relations as the members of the group, whatever the group's original nature (symmetry operations, functional forms, matrices, permutations, etc.). For some abstract groups it is difficult to give a written description of the elements and their properties without recourse to such representations. Most of our applications will be concerned with representations of the groups that consist of the symmetry operations on molecules containing two or more identical atoms.
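As a minimal illustration of the idea (the cyclic group C3 is chosen here for concreteness and is not drawn from the text), rotations of the plane by 0°, 120° and 240° give 2 × 2 matrices that multiply exactly as the three group elements e, r, r² do:

```python
import math

def rot(theta):
    """2x2 matrix for a rotation of the plane by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def close(A, B, tol=1e-12):
    """Element-wise comparison allowing for floating-point rounding."""
    return all(abs(a - b) < tol for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Matrices representing the elements e, r, r^2 of C3
e, r, r2 = (rot(2 * math.pi * k / 3) for k in range(3))

# The matrices obey the same multiplication table as the group:
# r . r = r^2 and r . r^2 = e, so the map is a faithful representation.
assert close(matmul(r, r), r2)
assert close(matmul(r, r2), e)
```

Because the multiplication table is reproduced exactly, any question about products of group elements can be answered by ordinary matrix multiplication, which is the working principle of the representations used in this chapter.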
In chapter 7 we discussed the algebra of vectors, and in chapter 8 we considered how to transform one vector into another using a linear operator. In this chapter and the next we discuss the calculus of vectors, i.e. the differentiation and integration both of vectors describing particular bodies, such as the velocity of a particle, and of vector fields, in which a vector is defined as a function of the coordinates throughout some volume (one-, two- or three-dimensional). Since the aim of this chapter is to develop methods for handling multi-dimensional physical situations, we will assume throughout that the functions with which we have to deal have sufficiently amenable mathematical properties, in particular that they are continuous and differentiable.
Differentiation of vectors
Let us consider a vector a that is a function of a scalar variable u. By this we mean that with each value of u we associate a vector a(u). For example, in Cartesian coordinates a(u) = ax(u)i + ay(u)j + az(u)k, where ax(u), ay(u) and az(u) are scalar functions of u and are the components of the vector a(u) in the x-, y- and z-directions respectively. We note that if a(u) is continuous at some point u = u0 then this implies that each of the Cartesian components ax(u), ay(u) and az(u) is also continuous there.
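The derivative of a(u) is then defined by the familiar limit, applied to the vector as a whole, and in a fixed Cartesian basis it reduces to differentiating each component separately:

```latex
\frac{d\mathbf{a}}{du}
= \lim_{\Delta u \to 0} \frac{\mathbf{a}(u+\Delta u) - \mathbf{a}(u)}{\Delta u}
= \frac{da_x}{du}\,\mathbf{i} + \frac{da_y}{du}\,\mathbf{j} + \frac{da_z}{du}\,\mathbf{k}.
```

The second equality holds because the basis vectors i, j, k are constant; in non-Cartesian coordinate systems the basis vectors themselves vary and contribute additional terms.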
In chapter 24, we developed the basic theory of the functions of a complex variable, z = x + iy, studied their analyticity (differentiability) properties and derived a number of results concerned with values of contour integrals in the complex plane. In the current chapter we will show how some of those results and properties can be exploited to tackle problems arising directly from physical situations or from apparently unrelated parts of mathematics.
In the former category will be the use of the differential properties of the real and imaginary parts of a function of a complex variable to solve problems involving Laplace's equation in two dimensions, whilst an example of the latter might be the summation of certain types of infinite series. Other applications, such as the Bromwich inversion formula for Laplace transforms, appear as mathematical problems that have their origins in physical applications; the Bromwich inversion enables us to extract the spatial or temporal response of a system to an initial input from the representation of that response in ‘frequency space’ – or, more correctly, imaginary frequency space.
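The connection with Laplace's equation rests on the Cauchy–Riemann relations: if f(z) = u + iv is analytic, then

```latex
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}
\quad\Longrightarrow\quad
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0,
\qquad
\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} = 0,
```

where the implication follows on differentiating the first relation with respect to x, the second with respect to y, and adding. Both the real and imaginary parts of an analytic function are therefore harmonic, which is what makes such functions natural solutions of two-dimensional potential problems.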
Other topics that will be considered are the location of the (complex) zeros of a polynomial, the approximate evaluation of certain types of contour integrals using the methods of steepest descent and stationary phase, and the so-called ‘phase-integral’ solutions to some differential equations. For each of these a brief introduction is given at the start of the relevant section and to repeat them here would be pointless. We will therefore move on to our first topic of complex potentials.
In chapter 2, we discussed functions f of only one variable x, which were usually written f(x). Certain constants and parameters may also have appeared in the definition of f, e.g. f(x) = ax + 2 contains the constant 2 and the parameter a, but only x was considered as a variable and only the derivatives f^(n)(x) = d^n f/dx^n were defined.
However, we may equally well consider functions that depend on more than one variable, e.g. the function f(x, y) = x^2 + 3xy, which depends on the two variables x and y. For any pair of values x, y, the function f(x, y) has a well-defined value, e.g. f(2, 3) = 22. This notion can clearly be extended to functions dependent on more than two variables. For the n-variable case, we write f(x1, x2, …, xn) for a function that depends on the variables x1, x2, …, xn. When n = 2, x1 and x2 correspond to the variables x and y used above.
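The statement that f(x, y) has a well-defined value for every pair (x, y) can be checked directly in a couple of lines:

```python
def f(x, y):
    """The two-variable function f(x, y) = x^2 + 3xy from the text."""
    return x ** 2 + 3 * x * y

# Evaluating at the pair (2, 3): 2^2 + 3*2*3 = 4 + 18
print(f(2, 3))  # 22
```

Extending this to n variables simply means giving the function n arguments, one per variable.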
Functions of one variable, like f(x), can be represented by a graph on a plane sheet of paper, and it is apparent that functions of two variables can, with little effort, be represented by a surface in three-dimensional space. Thus, we may also picture f(x, y) as describing the variation of height with position in a mountainous landscape. Functions of many variables, however, are usually very difficult to visualise and so the preliminary discussion in this chapter will concentrate on functions of just two variables.