In 1827, a scientist named Robert Brown used a microscope to observe the motion of tiny pollen grains suspended in water. These tiny grains jiggle about with a small but rapid and erratic motion, and this motion has since become known as Brownian motion in recognition of Brown's pioneering work. Einstein was the first to provide a theoretical treatment of this motion (in 1905), and in 1908 the French physicist Paul Langevin provided an alternative approach to the problem. The approach we will take here is very similar to Langevin's, although a little more sophisticated, since the subject of Itô calculus was not developed until the 1940s.
The analysis is motivated by the realization that the erratic motion of the pollen grains comes from the fact that the liquid is composed of lumps of matter (molecules), and that these are bumping around randomly and colliding with the pollen grain. Because the motion of the molecules is random, the net force on the pollen grain fluctuates in both size and direction depending on how many molecules hit it at any instant, and whether there are more impacts on one side of it or another.
To describe Brownian motion we will assume that there is a rapidly fluctuating force on the pollen grain, and that the fluctuations of this force are effectively white noise.
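As a concrete illustration of this picture, the sketch below integrates a Langevin-type equation for the grain's velocity with the Euler–Maruyama scheme; the damping and noise strengths are illustrative values assumed here, not ones taken from the text.

```python
import numpy as np

# Euler-Maruyama integration of a Langevin equation for a Brownian particle:
#   m dv = -gamma * v dt + sigma dW,   dx = v dt
# All parameter values are illustrative assumptions, not values from the text.
rng = np.random.default_rng(0)
m, gamma, sigma = 1.0, 1.0, 0.5
dt, n_steps = 1e-3, 10_000

v = np.zeros(n_steps + 1)   # velocity of the grain
x = np.zeros(n_steps + 1)   # position of the grain
for n in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))   # white-noise increment, variance dt
    v[n + 1] = v[n] + (-gamma / m) * v[n] * dt + (sigma / m) * dW
    x[n + 1] = x[n] + v[n] * dt
```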
So far we have studied stochastic differential equations driven by two fundamentally different noise processes, the Wiener process and the Poisson process. The sample paths of the Wiener process are continuous, while those of the Poisson process are discontinuous. The sample paths of a stochastic process x(t) are continuous if its increments, dx, are infinitesimal, meaning that dx → 0 as dt → 0. The Wiener and Poisson processes have two properties in common. The first is that the probability densities of their respective increments do not change with time, and the second is that their increments at any given time are independent of their increments at all other times. The increments of both processes are thus mutually independent and identically distributed, or i.i.d. for short. In Section 3.3 we discussed why natural noise processes that approximate continuous i.i.d. processes are usually Gaussian, a consequence of the central limit theorem. In this chapter we consider all possible i.i.d. noise processes. These are the Lévy processes, and they include not only the Gaussian and Poisson (jump) processes that we have studied so far, but also processes with continuous sample paths that do not obey the central limit theorem.
There are three conditions that define the class of Lévy processes. As mentioned above, the infinitesimal increments, dL, for a given Lévy process, L(t), are all mutually independent and all have the same probability density.
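To make the i.i.d.-increment picture concrete, the following sketch (with assumed parameter values, not ones from the text) builds a Wiener path and a Poisson path by summing independent, identically distributed increments over equal time steps.

```python
import numpy as np

# Sample paths from i.i.d. increments (illustrative parameters assumed):
# Wiener increments are Gaussian with variance dt; Poisson increments count
# the jumps occurring in each interval of length dt.
rng = np.random.default_rng(1)
dt, n_steps, rate = 1e-3, 5_000, 2.0

dW = rng.normal(0.0, np.sqrt(dt), n_steps)   # Wiener increments ~ N(0, dt)
dN = rng.poisson(rate * dt, n_steps)         # Poisson increments, mean rate*dt

W = np.concatenate(([0.0], np.cumsum(dW)))   # continuous sample path
N = np.concatenate(([0.0], np.cumsum(dN)))   # piecewise-constant jump path
```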
This book is intended for a one-semester graduate course on stochastic methods. It is specifically targeted at students and researchers who wish to understand and apply stochastic methods to problems in the natural sciences, and to do so without learning the technical details of measure theory. For those who want to familiarize themselves with the concepts and jargon of the “modern” measure-theoretic formulation of probability theory, these are described in the final chapter. The purpose of this final chapter is to provide the interested reader with the jargon necessary to read research articles that use the modern formalism. This can be useful even if one does not require this formalism in one's own research.
This book contains more material than I cover in my current graduate class on the subject at UMass Boston. One can select from the text various optional paths depending on the purpose of the class. For a graduate class for physics students who will be using stochastic methods in their research work, whether in physics or interdisciplinary applications, I would suggest the following: Chapters 1, 2, 3 (with Section 3.8.5 optional), 4 (with Section 4.2 optional, as alternative methods are given in 7.7), 5 (with Section 5.2 optional), 7 (with Sections 7.8 and 7.9 optional), and 8 (with Section 8.9 optional). In the above outline I have left out Chapters 6, 9 and 10.
We have seen in the previous chapter how to define a stochastic process using a sequence of Gaussian infinitesimal increments dW, and how to obtain new stochastic processes as the solutions to stochastic differential equations driven by this Gaussian noise. We have seen that a stochastic process is a random variable x(t) at each time t, and we have calculated its probability density, P(x, t), average 〈x(t)〉 and variance V[x(t)]. In this chapter we will discuss and calculate some further properties of a stochastic process, in particular its sample paths, two-time correlation function, and power spectral density (or power spectrum). We also discuss the fact that Wiener noise is white noise.
Sample paths
A sample path of the Wiener process is a particular choice (or realization) of each of the increments dW. Since each increment is infinitesimal, we cannot plot a sample path with infinite accuracy, but must choose some time discretization Δt and plot W(t) at the points nΔt. Note that in doing so, even though we do not calculate W(t) for the points in between the values nΔt, the points we plot do lie precisely on a valid sample path, because we know precisely the probability density for each increment ΔW on the intervals Δt. If we choose Δt small enough, then we cannot tell by eye that the resolution is limited.
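A minimal sketch of this procedure, with an assumed time step and horizon: draw each increment ΔW exactly from N(0, Δt) and plot the partial sums at the points nΔt.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot a Wiener sample path at resolution dt (the values of dt and T are assumed).
# Each plotted point lies exactly on a valid sample path because each increment
# is drawn from its exact distribution N(0, dt).
rng = np.random.default_rng(2)
T, dt = 1.0, 1e-3
n = int(T / dt)
t = np.arange(n + 1) * dt

dW = rng.normal(0.0, np.sqrt(dt), n)         # increments ΔW ~ N(0, Δt)
W = np.concatenate(([0.0], np.cumsum(dW)))   # W(n Δt) as the running sum

plt.plot(t, W)
plt.xlabel("t")
plt.ylabel("W(t)")
plt.show()
```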
This book surveys the application of the recently developed technique of the wavelet transform to a wide range of physical fields, including astrophysics, turbulence, meteorology, plasma physics, atomic and solid state physics, multifractals occurring in physics, biophysics (in medicine and physiology) and mathematical physics. The wavelet transform can analyze the scale-dependent characteristics of a signal (or image) locally, unlike the Fourier transform, and more flexibly than the windowed Fourier transform developed by Gabor fifty years ago. The continuous wavelet transform is used mostly for analysis, while the discrete wavelet transform allows very fast compression and transmission of data and speeds up numerical calculation; it is applied, for example, in the solution of partial differential equations in physics. This book will be of interest to graduate students and researchers in many fields of physics, and to applied mathematicians and engineers interested in physical applications.
This book is an up-to-date introduction to univariate spectral analysis at the graduate level, which reflects a new scientific awareness of spectral complexity, as well as the widespread use of spectral analysis on digital computers with considerable computational power. The text provides theoretical and computational guidance on the available techniques, emphasizing those that work in practice. Spectral analysis finds extensive application in the analysis of data arising in many of the physical sciences, ranging from electrical engineering and physics to geophysics and oceanography. A valuable feature of the text is that many examples are given showing the application of spectral analysis to real data sets. Special emphasis is placed on the multitaper technique, because of its practical success in handling spectra with intricate structure, and its power to handle data with or without spectral lines. The text contains a large number of exercises, together with an extensive bibliography.
The statistical bootstrap is one of the methods that can be used to estimate unknown parameters of a random process or of a signal observed in noise, based on a random sample. Such situations are common in signal processing, and the bootstrap is especially useful when only a small sample is available or when an analytical treatment is too cumbersome or even impossible. This book covers the foundations of the bootstrap, its properties, its strengths and its limitations. The authors focus on bootstrap signal detection in Gaussian and non-Gaussian interference, as well as on bootstrap model selection. The theory developed in the book is supported by a number of useful practical examples written in MATLAB. The book is aimed at graduate students and engineers, and includes applications to real-world problems in areas such as radar and sonar, biomedical engineering and automotive engineering.
This chapter develops more tools for working with random variables. The probability generating function is the key tool for working with sums of independent nonnegative integer-valued random variables. When random variables are only uncorrelated, we can work with averages (normalized sums) by using the weak law of large numbers. We emphasize that the weak law makes the connection between probability theory and the everyday practice of using averages of observations to estimate probabilities of real-world measurements. The last two sections introduce conditional probability and conditional expectation. The three important tools here are the law of total probability, the law of substitution, and, for independent random variables, “dropping the conditioning.”
The foregoing concepts are developed here for discrete random variables, but they will all be extended to more general settings in later chapters.
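As a small illustration of the weak law in this spirit (an example added here, not the book's), the relative frequency of an event in repeated independent trials settles down to its probability as the number of trials grows.

```python
import numpy as np

# Relative frequency -> probability (weak law of large numbers), with an
# assumed event probability p = 0.3.
rng = np.random.default_rng(5)
p = 0.3
for n in (10, 100, 10_000, 1_000_000):
    trials = rng.random(n) < p   # independent Bernoulli(p) indicators
    print(n, trials.mean())      # the sample average approaches p
```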
Probability generating functions
In many problems we have a sum of independent random variables and would like to know its probability mass function. For example, in an optical communication system, the received signal might be Y = X + W, where X is the number of photoelectrons due to incident light on a photodetector, and W is the number of electrons due to dark current noise in the detector. An important tool for solving these kinds of problems is the probability generating function. The name derives from the fact that it can be used to compute the probability mass function.
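A short numerical sketch of the idea (with assumed Poisson models and means for both terms, not values from the text): since the probability generating function of a sum of independent nonnegative integer-valued random variables is the product of their generating functions, the PMF of Y = X + W is the convolution of the PMFs of X and W.

```python
import numpy as np
from scipy.stats import poisson

# PMF of Y = X + W for independent X and W (both assumed Poisson here).
# Multiplying generating functions corresponds to convolving PMF arrays.
k = np.arange(40)
pmf_X = poisson.pmf(k, mu=10.0)   # photoelectrons from incident light (assumed mean)
pmf_W = poisson.pmf(k, mu=2.0)    # dark-current electrons (assumed mean)

pmf_Y = np.convolve(pmf_X, pmf_W)[:k.size]
# For Poisson inputs the exact answer is again Poisson, with mean 12, and the
# convolution reproduces it to numerical precision on this range:
print(np.max(np.abs(pmf_Y - poisson.pmf(k, mu=12.0))))
```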
Why do electrical and computer engineers need to study probability?
Probability theory provides powerful tools to explain, model, analyze, and design technology developed by electrical and computer engineers. Here are a few applications.
Signal processing. My own interest in the subject arose when I was an undergraduate taking the required course in probability for electrical engineers. We considered the situation shown in Figure 1.1. To determine the presence of an aircraft, a known radar pulse v(t) is sent out. If there are no objects in range of the radar, the radar's amplifiers produce only a noise waveform, denoted by Xt. If there is an object in range, the reflected radar pulse plus noise is produced. The overall goal is to decide whether the received waveform is noise only or signal plus noise. To get an idea of how difficult this can be, consider the signal plus noise waveform shown at the top in Figure 1.2. Our class addressed the subproblem of designing an optimal linear system to process the received waveform so as to make the presence of the signal more obvious. We learned that the optimal transfer function is given by the matched filter. If the signal at the top in Figure 1.2 is processed by the appropriate matched filter, we get the output shown at the bottom in Figure 1.2. You will study the matched filter in Chapter 10.
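As a rough, self-contained illustration (the pulse shape, noise level, and signal location are all assumed here, not taken from Figure 1.2), matched filtering correlates the received waveform with the known pulse, which concentrates the signal energy into a visible peak.

```python
import numpy as np

# Matched filtering of a noisy received waveform (all parameters assumed).
rng = np.random.default_rng(3)
n = 2000
pulse = np.sin(2 * np.pi * np.arange(100) / 20) * np.hanning(100)  # known pulse v(t)

received = rng.normal(0.0, 1.0, n)   # amplifier noise X_t
received[900:1000] += pulse          # reflected pulse added: the signal-plus-noise case

# Matched filtering = convolution with the time-reversed pulse (i.e., correlation).
output = np.convolve(received, pulse[::-1], mode="same")
# The output peaks near sample 950, revealing the buried pulse.
```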
In Chapters 2 and 3, the only random variables we considered specifically were discrete ones such as the Bernoulli, binomial, Poisson, and geometric. In this chapter we consider a class of random variables allowed to take a continuum of values. These random variables are called continuous random variables and are introduced in Section 4.1. Continuous random variables are important models for integrator output voltages in communication receivers, file download times on the Internet, velocity and position of an airliner on radar, etc. Expectation and moments of continuous random variables are computed in Section 4.2. Section 4.3 develops the concepts of moment generating function (Laplace transform) and characteristic function (Fourier transform). In Section 4.4 expectation of multiple random variables is considered. Applications of characteristic functions to sums of independent random variables are illustrated. In Section 4.5 the Markov inequality, the Chebyshev inequality, and the Chernoff bound illustrate simple techniques for bounding probabilities in terms of expectations.
Densities and probabilities
Introduction
Suppose that a random voltage in the range [0,1) is applied to a voltmeter with a one-digit display. Then the display output can be modeled by a discrete random variable Y taking values .0, .1, .2, …, .9 with P(Y = k/10) = 1/10 for k = 0, …, 9.
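A quick numerical check of this model (a sketch added here, not from the text): quantize a uniform random voltage on [0, 1) to one digit and estimate the probabilities of the ten display values.

```python
import numpy as np

# Simulate the one-digit voltmeter display and check P(Y = k/10) ≈ 1/10.
rng = np.random.default_rng(4)
U = rng.uniform(0.0, 1.0, 100_000)   # random voltage in [0, 1)
Y = np.floor(10 * U) / 10            # displayed value: .0, .1, ..., .9

values, counts = np.unique(Y, return_counts=True)
print(dict(zip(values, counts / U.size)))   # each relative frequency ≈ 0.1
```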