The theoretical side of physical science holds up a mathematical mirror to nature. It seeks to find in the infinite variety of physical phenomena the few basic laws and relationships which underlie them. A secondary goal is the expression of these relations in efficient and transparent language.
After Newton had shown the power of this method, the eighteenth and nineteenth centuries saw its steady advance, hand-in-hand with experiment. At the end of the nineteenth century there was a crisis in physics – a widening gulf between theory and experiment – but, when Einstein emerged to resolve it, the new physics was still based on the old mathematics. It was simply used in surprising new ways. So it remains today, to a large extent, whatever educational theorists may tell us. Newton would not be greatly puzzled by the mathematics of Schrödinger's Equation.
On the other hand, the rapid development of computers is certainly changing our attitude to mathematics. This is obvious in the case of straightforward numerical calculations, but it extends also to the simulation of complex systems, the manipulation of algebra and even the proving of theorems. Applied mathematics is the art of the possible, and computers have widened its scope enormously. They are not just ‘number-crunchers’. Nor are they available only to specialists. Most students today enjoy access to a powerful computer system, and many are skilled programmers at an early age.
A vector field is a vector function of position. In physics such a function is used to describe, for example, the force F on a particle due to its interaction with other particles, which contribute to the total field experienced by the particle according to force laws such as Coulomb's law. It is also basic to fluid dynamics, in which the local velocity v of the fluid flow is a function of position r.
In certain branches of physics and engineering, the computation of vector fields offers the rewards and status of a career. For example, aircraft designers need immensely detailed knowledge of the air currents, pressures and stress distributions over airframes, and must compute and plot vector fields of great complexity. General methods of systematic calculation are still the subject of mathematical research.
Let us look first at the case of a force field, using the particle-in-a-bowl problem of chapters 20 and 27 as our example. In the equation dv/dt = −ω²r, the right-hand side can be regarded as the vector function F(r) = −ω²r. This is the force that would act on a unit mass at r. The time variable is absent from this expression, which describes a static field. The properties of a vector field can be studied without any immediate reference to particle motion.
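The static field F(r) = −ω²r can be sampled numerically without any reference to a moving particle. A minimal sketch in Python (the value ω = 2.0 is an assumption chosen purely for illustration):

```python
import math

OMEGA = 2.0  # assumed angular frequency, for illustration only

def force_field(r):
    """Force per unit mass F(r) = -omega^2 * r for the particle-in-a-bowl field."""
    return tuple(-OMEGA ** 2 * component for component in r)

# The field at any point is directed back towards the origin,
# with magnitude proportional to the distance from it.
print(force_field((1.0, 0.0, 0.0)))
print(force_field((0.5, -0.5, 0.0)))
```

Note that the function depends only on position, not on time: evaluating it twice at the same r always gives the same vector, which is what makes the field static.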
To express the Coulomb field due to a point charge, fixed arbitrary axes may be chosen with the charge (positive, say) at the origin, x = y = z = 0.
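With axes fixed and the charge at the origin, the Coulomb field in SI units is E(r) = q r / (4πε₀|r|³). A brief sketch of evaluating it (the charge value used below is an arbitrary illustrative choice):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def coulomb_field(q, r):
    """Electric field E = q*r / (4*pi*eps0*|r|^3) at displacement r from a point charge q at the origin."""
    x, y, z = r
    mag = math.sqrt(x * x + y * y + z * z)
    k = q / (4.0 * math.pi * EPS0 * mag ** 3)
    return (k * x, k * y, k * z)

# Inverse-square behaviour: doubling the distance quarters the field strength.
e_near = coulomb_field(1e-9, (1.0, 0.0, 0.0))
e_far = coulomb_field(1e-9, (2.0, 0.0, 0.0))
print(e_near[0] / e_far[0])
```

For a positive charge the field points radially outwards; replacing q by −q reverses every field vector.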
Most physical laws express numerical relations between quantities which can be independently measured, such as the mass of a body, its acceleration and the force which is applied to it. Ultimately, they are established or refuted by experiment. The range of their validity is determined by the range of practicable experiments. Their generality is always in question, and physicists continually seek new insights in the breakdown of old theories.
It is important therefore to distinguish between physical laws, which are provisional and approximate (since, in principle, we expect to find circumstances in which they do not apply), and other mathematical relationships which are merely conventional definitions, such as ‘momentum equals mass times velocity’. These cannot be overturned, although there may be a time or a place in which they are not useful.
Given a problem to solve, we make the transition to mathematics by choosing appropriate physical laws and definitions. For the purposes of mathematical manipulation we may provisionally regard this formulation as exact, but in practice we will soon encounter uncertainties of two kinds.
First, if we wish to use experimentally measured quantities as numerical input to our calculations, as must ultimately be the case, we should recognise that every measurement involves some degree of uncertainty. The word ‘error’ is commonly used for this, which is unfortunate, because it need not be the result of any mistake or misjudgement, but may simply follow from the limited accuracy of the available measuring apparatus.
The latest VTOL (vertical take-off and landing) aircraft are described in the press as having ‘vectored thrust’. The jet can indeed be rotated relative to the aircraft so as to produce an orientated propelling force. In less newsworthy applications, physicists and engineers have been dealing with vectored thrust for a century or so, and in an immense variety of situations.
But thrust is not the only quantity that is vectored. The term vector may refer to any directed quantity, the most elementary instance being the geometrical displacement of the previous chapter. When dealing with vectors, ordinary numbers are called scalars. In physics all vector quantities are ultimately related to displacements, so it is both convenient and sufficient to single these out for discussion. The displacement (x, y, z) from the origin, or from any point, has a magnitude and a direction. Attending first to the x-direction (the axes, remember, are arbitrary, but have usually been chosen to simplify some geometry), the displacement (1, 0, 0) is called the unit vector in the x-direction. It is denoted by the single bold-face symbol i. Positive or negative multiples of i can be added in any order to give a larger or smaller displacement along the x-axis. The rule is: multiplying a vector by a scalar changes its magnitude by the same factor, and multiplying by −1 reverses its direction (see fig. 4.1).
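The scalar-multiplication rule can be stated in a few lines of Python, a sketch using plain tuples for vectors:

```python
import math

def scale(s, v):
    """Multiply vector v by scalar s: the magnitude changes by |s|, and s = -1 reverses the direction."""
    return tuple(s * component for component in v)

def norm(v):
    """Magnitude of a displacement vector."""
    return math.sqrt(sum(c * c for c in v))

i = (1.0, 0.0, 0.0)  # unit vector in the x-direction
print(scale(3.0, i))   # three unit steps along the x-axis
print(scale(-1.0, i))  # the same step, reversed
```

The checks confirm the rule: scaling by −2 doubles the magnitude while reversing the direction, exactly as fig. 4.1 depicts geometrically.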
Physical scientists use a lot of common sense and only a little statistics in dealing with errors. Only in special areas (such as the work of Standards Laboratories) are errors relentlessly pursued with full rigour. Usually, it is enough to be satisfied that they are negligible or insignificant for the purposes at hand. As with safety regulations, one tries to ensure a large ‘margin of error’ wherever possible. This contrasts with the state of affairs in biology and in social sciences. These deal in quantities, such as the frequency of the human heartbeat, so widely variable that errors must be treated with caution and proper statistical methodology. But even common sense needs a little mathematical background, such as is given here.
First, we may distinguish between random and systematic errors. A systematic error is one which is consistent from one measurement to the next. It might arise from inaccurate adjustment of instruments, a faulty calibration, the ineptitude of the scientist himself, or simply from a failure to recognise some influence upon the data which was not the object of the experiment. We can try to identify such errors and either eliminate them or add corrections to allow for them, whereupon they no longer concern us. It is not usual to include systematic errors in an error estimate since, if we can identify them, they can be removed!
The heart of most quantitative description of physical systems is the mathematical function, expressing the dependence of one variable quantity upon another. The named (or ‘special’) functions dealt with so far – the exponential, logarithm, cosine and related functions – constitute a basic set of tools for describing functions. Their use can be extended enormously by combining them in various ways, so that in principle there is very little functional behaviour that they cannot represent. But for many purposes it is more convenient to specify additional functions. Often these are solutions of differential equations (chapters 18–20). By such means, further important named functions are introduced into the mathematical vocabulary, such as the Bessel, Legendre and hypergeometric functions. The definition and systematisation of a whole class of named and related functions is often pursued through their representation by Taylor (or Maclaurin) series.
But such power series have a more direct practical use for the physicist, namely to approximate functions in numerical calculations when the linear approximation of the last chapter becomes inadequate but only a few additional powers are needed. Sometimes experimental data are confined to such a range, and the goal of measurement is to find the first few coefficients of a power series.
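As a concrete sketch of this use, the Maclaurin series cos x = 1 − x²/2! + x⁴/4! − … can be truncated after a few terms; over a modest range the partial sum is already an excellent approximation:

```python
import math

def taylor_cos(x, terms):
    """Partial sum of the Maclaurin series for cos x, keeping the given number of terms."""
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

# Four terms (powers up to x^6) already reproduce cos(0.5) to better than 1e-6.
print(taylor_cos(0.5, 4), math.cos(0.5))
```

The first neglected term, x⁸/8!, bounds the error of the alternating series, which is why so few powers are needed when |x| is small.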
Throughout this book references have been made to results ‘which are derived from the theory of complex variables’. This theory thus becomes an integral part of the mathematics appropriate to physical applications. The difficulty with it, from the point of view of a book such as the present one, is that, although the applications for which it is needed are very real and applied, the underlying basis of complex variable theory has a distinctly pure mathematics flavour.
To adopt this more rigorous approach correctly would involve developing a large amount of groundwork in analysis, for example, precise definitions of continuity and differentiability, the theory of sets and a detailed study of boundedness. It has been decided not to do so here, but rather to pursue only those parts of the formal theory which are needed to establish the results used elsewhere in this book and some others of general utility. Specifically, the subjects treated are:
(i) complex potentials for two-dimensional potential problems,
(ii) location of zeros of a function, in particular a polynomial,
(iii) summation of series and evaluation of integrals,
(iv) the inverse Laplace transform integral.
In this spirit, the proofs that are adopted for some of the standard results of complex variable theory have been chosen with an eye to simplicity rather than sophistication. This means that in some cases the imposed conditions are more stringent than would be strictly necessary if more sophisticated proofs were used; where this happens the less restrictive results are usually stated as well.
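As a numerical illustration of subject (iii), the residue theorem gives the standard result ∫₋∞^∞ dx/(1 + x²) = 2πi × Res[1/(1 + z²), z = i] = π. A crude check in Python (the truncation length L and step count are arbitrary choices; the neglected tails contribute about 2/L):

```python
import math

def integrand(x):
    return 1.0 / (1.0 + x * x)

# Composite trapezoidal rule on [-L, L]; the exact value of this finite
# integral is 2*atan(L), which approaches pi as L grows.
L, n = 100.0, 20001
h = 2.0 * L / (n - 1)
total = 0.5 * (integrand(-L) + integrand(L))
total += sum(integrand(-L + k * h) for k in range(1, n - 1))
approx = h * total
print(approx, math.pi)
```

The contour-integral evaluation delivers the exact answer π in one step; the numerical check merely confirms it, and illustrates why such residue results are worth establishing.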
It will undoubtedly have been observed by the reader who has only a moderate familiarity with the mathematical methods used for physical problems, that harmonic waves (of the form exp (iωt) or cos ωt) are very convenient functions to deal with. It is straightforward to differentiate, integrate and multiply them; their moduli are easily taken, and each contains only one frequency [or wavenumber, for forms like exp (ikx), using an obvious notation]. This last point is important since the response of many physical systems, such as an electronic circuit or a prism, depends most directly on the frequency content of the input the system receives.
Even if we were not familiar with the results of the Sturm–Liouville theory discussed in the previous chapter, these properties by themselves would indicate that it may be advantageous in some cases to express all the functions involved in a problem as superpositions of harmonic wave functions (Fourier series or transforms). The otherwise difficult parts of the problem might then be carried through more simply, and finally, if necessary, the output functions reconstituted from the ‘processed’ waves.
In fact, we recognize the harmonic wave y(x) = exp (ikx) as an eigenfunction of the simplest non-trivial Sturm–Liouville equation, with p = 1, q = 0, ρ = 1 and λ = k², and thus, provided that the boundary term [y*(dy/dx)] takes equal values at the two ends of the interval [a, b], we may apply the general results of chapter 7. This boundary condition is clearly going to be satisfied if we consider periodic problems of period b − a, and so we are led to Fourier series or, if a → −∞ and b → ∞, to Fourier transforms.
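The ‘one frequency per harmonic’ property can be seen discretely: a sampled cosine of a single wavenumber projects onto exactly one pair of Fourier coefficients and onto nothing else. A stdlib-only sketch (the sample count N = 64 and wavenumber k = 3 are illustrative choices):

```python
import cmath
import math

def dft_coefficient(samples, k):
    """k-th discrete Fourier coefficient c_k = (1/N) * sum_n x_n * exp(-2*pi*i*k*n/N)."""
    N = len(samples)
    return sum(x * cmath.exp(-2j * math.pi * k * n / N)
               for n, x in enumerate(samples)) / N

N = 64
signal = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]  # a single harmonic, k = 3

# cos splits into two exponentials, so c_3 = c_{-3} = 1/2 and all other coefficients vanish.
print(abs(dft_coefficient(signal, 3)))
print(abs(dft_coefficient(signal, 5)))
```

This orthogonality of the harmonic waves is precisely what the Sturm–Liouville results guarantee, and it is what makes the reconstruction of a function from its ‘processed’ waves possible.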