Every measurement is in fact a random sample from a probability distribution. In order to make a judgment on the accuracy of an experimental result we must know something about the underlying probability distribution. This chapter treats the properties of probability distributions and gives details about the most common distributions. The most important distribution of all is the normal distribution, not least because the central limit theorem tells us that it is the limiting distribution for the sum of many random disturbances.
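As a quick numerical illustration of this limiting behaviour (a sketch of our own, with an arbitrary choice of uniform disturbances and sample sizes, not an example from the text), the sum of many independent uniform deviates already has the mean and spread the normal limit predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum n independent uniform(-0.5, 0.5) disturbances, many times over.
n, trials = 50, 100_000
sums = rng.uniform(-0.5, 0.5, size=(trials, n)).sum(axis=1)

# The central limit theorem predicts the sums are approximately normal
# with mean 0 and variance n/12 (a uniform(-0.5, 0.5) has variance 1/12).
print("sample mean:", sums.mean())        # close to 0
print("sample std :", sums.std())         # close to sqrt(n/12)
print("predicted  :", np.sqrt(n / 12))    # ~ 2.041
```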
Introduction
Every measurement xᵢ of a quantity x can be considered to be a random sample from a probability distribution p(x) of x. To analyze random deviations in measured quantities we must know something about this underlying probability distribution, from which each measurement is supposed to be a random sample.
If x can only assume discrete values x = k, k = 1, …, n, then p(k) forms a discrete probability distribution, often called the probability mass function (pmf): p(k) indicates the probability that an arbitrary sample has the value k. If x is a continuous variable, then p(x) is a continuous function of x, the probability density function (pdf). The meaning of p(x) is: the probability that a sample xᵢ occurs in the interval (x, x + dx) equals p(x) dx.
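To make this concrete, here is a minimal numerical check (our own illustration, assuming a standard normal pdf and an arbitrary interval): the fraction of samples falling in (x, x + dx) should approach p(x) dx for small dx:

```python
import numpy as np
from math import sqrt, pi, exp

rng = np.random.default_rng(1)
samples = rng.standard_normal(1_000_000)

x, dx = 0.5, 0.01
# Empirical probability of a sample falling in (x, x + dx)
frac = np.mean((samples > x) & (samples < x + dx))

# p(x) dx for the standard normal pdf
p_x = exp(-x**2 / 2) / sqrt(2 * pi)
print(frac, p_x * dx)   # the two numbers agree closely
```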
Often you perform a series of experiments in which you vary an independent variable, such as temperature. What you are really interested in is the relation between the measured values and the independent variables, but the trouble is that your experimental values contain statistical deviations. You may already have a theory about the form of this relation and use the experiment to derive the still unknown parameters. It can also happen that the experiment is used to validate the theory or to decide on a modification. In this chapter a global view is taken and functional relations are qualitatively evaluated using simple graphical presentations of the experimental data. The trick of transforming functional relations to a linear form allows quick graphical interpretations. Even the inaccuracies of the parameters can be graphically estimated. If you want accurate results, then skip to the next chapter.
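As a sketch of the linearization trick (the exponential model, parameter values and noise level below are invented for illustration, not taken from the text): a relation y = a·exp(bx) becomes the straight line ln y = ln a + bx, so fitting a line to the transformed data recovers both parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data from y = a * exp(b x) with a little multiplicative noise
a_true, b_true = 2.0, 0.8
x = np.linspace(0, 5, 20)
y = a_true * np.exp(b_true * x) * rng.normal(1.0, 0.03, size=x.size)

# Transform to linear form: ln y = ln a + b x, then fit a straight line
b_fit, ln_a_fit = np.polyfit(x, np.log(y), 1)
print("b ~", b_fit, "  a ~", np.exp(ln_a_fit))
```

Plotting ln y against x (or using log-scaled graph paper) gives the same quick visual check: a straight line confirms the exponential form, and its slope and intercept estimate the parameters.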
Introduction
In the previous chapter you learned how to handle a series of equivalent measurements that should have produced equal results if there had been no random deviations in the measured data. Very commonly, however, a quantity yᵢ is measured as a function f(xᵢ) of an independent variable xᵢ such as time, temperature, distance, concentration or bin number. The measured quantity may also be a function of several such variables. Usually the independent variables – which are under the control of the experimenter – are known with high accuracy, and the dependent variables – the measured values – are subject to random errors.
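This setting – exactly known xᵢ, noisy yᵢ – is easy to simulate; the linear law and noise level in the following sketch are our own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Exactly known independent variable (e.g. temperature set points)
x = np.linspace(280.0, 320.0, 9)

# Dependent variable: a linear law plus random measurement error in y only
slope_true, intercept_true, sigma = 0.05, 1.2, 0.02
y = intercept_true + slope_true * x + rng.normal(0.0, sigma, size=x.size)

# A least-squares straight-line fit recovers the parameters
slope, intercept = np.polyfit(x, y, 1)
print(f"slope ~ {slope:.4f}, intercept ~ {intercept:.3f}")
```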
It is normal practice when starting the mathematical investigation of a physical problem to assign algebraic symbols to the quantity or quantities whose values are sought, either numerically or as explicit algebraic expressions. For the sake of definiteness, in this chapter, our discussion will be in terms of a single quantity, which we will denote by x most of the time. The extension to two or more quantities is straightforward in principle, but usually entails much longer calculations, or a significant increase in complexity when graphical methods are used.
Once the sought-for quantity x has been identified and named, subsequent steps in the analysis involve applying a combination of known laws, consistency conditions and (possibly) given constraints to derive one or more equations satisfied by x. These equations may take many forms, ranging from a simple polynomial equation to, say, a partial differential equation with several boundary conditions. Some of the more complicated possibilities are treated in the later chapters of this book, but for the present we will be concerned with techniques for the solution of relatively straightforward algebraic equations.
When algebraic equations are to be solved, it is nearly always useful to be able to make plots showing how the functions fᵢ(x) involved in the problem change as their argument x is varied; here i is simply a label that identifies which particular function is being considered.
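A minimal sketch of this idea in code (the cubic equation below is an arbitrary example, not one from the text): tabulating f(x) on a grid plays the role of the plot, a sign change locates a root, and bisection refines it:

```python
import numpy as np

def f(x):
    # An example equation to solve: x**3 - 2*x - 5 = 0
    return x**3 - 2*x - 5

# Coarse "plot": tabulate f on a grid and note where it changes sign
xs = np.linspace(0, 3, 31)
for a, b in zip(xs[:-1], xs[1:]):
    if f(a) * f(b) < 0:
        # Refine the bracketed root by bisection
        for _ in range(50):
            m = 0.5 * (a + b)
            if f(a) * f(m) <= 0:
                b = m
            else:
                a = m
        print("root near", 0.5 * (a + b))   # ~ 2.0946
```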
Ninety percent of all physics is concerned with vibrations and waves of one sort or another. The same basic thread runs through most branches of physical science, from acoustics through engineering, fluid mechanics, optics, electromagnetic theory and X-rays to quantum mechanics and information theory. It is closely bound to the idea of a signal and its spectrum. To take a simple example: imagine an experiment in which a musician plays a steady note on a trumpet or a violin, and a microphone produces a voltage proportional to the instantaneous air pressure. An oscilloscope will display a graph of pressure against time, F(t), which is periodic. The reciprocal of the period is the frequency of the note, 440 Hz, say, for a well-tempered middle A – the tuning-up frequency for an orchestra.
The waveform is not a pure sinusoid, and it would be boring and colourless if it were. It contains ‘harmonics’ or ‘overtones’: multiples of the fundamental frequency, with various amplitudes and in various phases, depending on the timbre of the note, the type of instrument being played and on the player. The waveform can be analysed to find the amplitudes of the overtones, and a list can be made of the amplitudes and phases of the sinusoids which it comprises. Alternatively a graph, A(ν), can be plotted (the sound-spectrum) of the amplitudes against frequency (Fig. 1.1).
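Such an analysis is easy to imitate numerically; in the following sketch the overtone amplitudes are invented to mimic a non-sinusoidal 440 Hz note, and the discrete Fourier transform recovers the spectrum A(ν):

```python
import numpy as np

# Synthesize one second of a 440 Hz note with a few overtones
# (amplitudes invented for illustration).
fs = 44_100                      # sample rate in Hz
t = np.arange(fs) / fs
f0 = 440.0
amps = [1.0, 0.5, 0.25, 0.1]     # fundamental plus three harmonics
F = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
        for k, a in enumerate(amps))

# The spectrum A(nu): peaks at 440, 880, 1320 and 1760 Hz
spectrum = np.abs(np.fft.rfft(F)) / (fs / 2)
freqs = np.fft.rfftfreq(fs, d=1 / fs)
print(freqs[spectrum > 0.05])    # ~ [ 440.  880. 1320. 1760.]
```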
In Chapter 6 we discussed how complicated functions f(x) may be expressed as power series. Although they were not presented as such, the power series could all be viewed as linear superpositions of the monomial basic set of functions, namely 1, x, x², x³, …, xⁿ, … Natural though this set may seem, it is in many ways far from ideal: for example, its members possess no mutual orthogonality properties, a characteristic that is generally of great value when it comes to determining, for any particular function, the multiplying constant for each basic function in the sum. Moreover, this particular set of basic functions can only be used to represent continuous functions.
In the case of original functions f(t) that are periodic, some improvement on this situation can be made by using, as the basic set, sine and cosine functions. For a function with period T, say, the set of sine and cosine functions with arguments 2πnt/T, for all n ≥ 0, forms a suitable basic set for expressing f as a series; such a representation is called a Fourier series. One great advantage they possess over the monomial functions is that they are mutually orthogonal when integrated over any continuous period of length T, i.e. the integral from t₀ to t₀ + T of the product of any sine and any cosine, or of two sines or cosines with different values of n, is equal to zero.
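These orthogonality relations are easy to verify numerically; in the following sketch the period and the particular indices are chosen arbitrarily:

```python
import numpy as np

T = 2.0                               # an arbitrary period
N = 4096
t = np.linspace(0.0, T, N, endpoint=False)
dt = T / N

def c(n): return np.cos(2 * np.pi * n * t / T)
def s(n): return np.sin(2 * np.pi * n * t / T)

def integrate(y):                     # rectangle rule over one full period
    return y.sum() * dt

# Products of distinct basis functions integrate to zero over a period...
print(integrate(s(2) * c(3)))         # ~ 0
print(integrate(s(1) * s(4)))         # ~ 0
print(integrate(c(2) * c(5)))         # ~ 0
# ...while a sine (or cosine) paired with itself gives T/2
print(integrate(s(3) * s(3)))         # ~ T/2 = 1.0
```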
This and the next chapter are concerned with the formalism of probably the most widely used mathematical technique in the physical sciences, namely the calculus. The current chapter deals with the process of differentiation whilst Chapter 4 is concerned with its inverse process, integration. The topics covered are essential for the remainder of the book; once studied, the contents of the two chapters serve as reference material, should that be needed. Readers who have had previous experience of differentiation and integration should ensure full familiarity by looking at the worked examples in the main text and by attempting the problems at the ends of the two chapters.
Also included in this chapter is a section on curve sketching. Most of the mathematics needed as background to this important skill for applied physical scientists was covered in the first two chapters, but delaying our main discussion of it until the end of this chapter allows the location and characterisation of turning points to be included amongst the techniques available.
Differentiation
Differentiation is the process of determining how quickly or slowly a function varies as the quantity on which it depends, its argument, is changed. More specifically, it is the procedure for obtaining an expression (numerical or algebraic) for the rate of change of the function with respect to its argument.
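As a concrete numerical counterpart (the function sin x, the evaluation point and the step size are our own choices), a central finite difference approximates this rate of change and can be checked against the known derivative:

```python
import numpy as np

def derivative(f, x, h=1e-5):
    # Central difference: (f(x+h) - f(x-h)) / (2h) -> f'(x) as h -> 0
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
print(derivative(np.sin, x))   # numerical estimate of the rate of change
print(np.cos(x))               # exact derivative of sin is cos
```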