The initial research into suprathreshold stochastic resonance described in Chapter 4 approaches the effect from the viewpoint of information transmission. As discussed briefly in Chapter 4, the suprathreshold stochastic resonance effect can also be modelled as stochastic quantization, and therefore results in nondeterministic lossy compression of a signal. The reason is that independently adding noise to a common signal before thresholding each noisy version with the same static threshold value is equivalent to quantizing the signal with random thresholds. This observation leads naturally to measuring and describing the performance of suprathreshold stochastic resonance with standard quantization theory. In a context where a signal is to be reconstructed from its quantized version, this requires a reproduction value, or reproduction point, to be assigned to each possible state of the quantized signal. The quantizing operation is often known as the encoding of a signal, and the assignment of reproduction values as its decoding. This chapter examines various methods for decoding the output of the suprathreshold stochastic resonance model, and evaluates the performance of each technique as the input noise intensity and array size change. The measure used is the mean square error distortion between the original input signal and the decoded output signal, as this is the performance criterion most often used in conventional quantization theory.
Introduction
We begin this chapter by very briefly reviewing the SSR model introduced in Chapter 4. We then introduce the concept of decoding the output of a quantizer's encoding to reconstruct the input signal, and consider measuring the performance of such a reconstruction.
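As a concrete illustration of this encoding and decoding pipeline, the following is a minimal Monte Carlo sketch in Python (our own illustration, not code from the text): it assumes a zero-mean, unit-variance Gaussian signal, all thresholds set to zero, independent Gaussian noise at each device, and reproduction points estimated as empirical conditional means.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 63        # number of threshold devices in the array
M = 100_000   # number of input samples
sigma = 0.8   # noise standard deviation (signal has unit variance)

# Common zero-mean, unit-variance Gaussian input; all thresholds at zero.
x = rng.standard_normal(M)

# Each device adds independent Gaussian noise before thresholding;
# the encoded output n counts how many devices switch on.
noise = sigma * rng.standard_normal((N, M))
n = np.sum((x + noise) > 0.0, axis=0)   # integer state in {0, ..., N}

# Decode each output state with its empirical conditional mean E[x | n],
# i.e. an estimate of the minimum mean square error reproduction point.
y = np.zeros(M)
for state in range(N + 1):
    mask = n == state
    if mask.any():
        y[mask] = x[mask].mean()

mse = np.mean((x - y) ** 2)
print(f"MSE distortion at sigma = {sigma}: {mse:.4f}")
```

Sweeping sigma and re-estimating the distortion traces out the MSE-versus-noise curve; its minimum at a nonzero noise intensity is the SSR signature examined in this chapter.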
To conclude this book, we summarize our main results and conclusions, before briefly speculating on the most promising areas for future research.
Putting it all together
Stochastic resonance
Chapter 2 presents a historical review of the major epochs in stochastic resonance (SR) research, and a discussion of the evolution of the term ‘stochastic resonance’. A list of the main controversies and debates associated with the field is also given.
Chapter 2 also demonstrates qualitatively that SR can actually occur in a single threshold device where the threshold is set to the signal mean. Although SR does not occur according to the conventional signal-to-noise ratio (SNR) measure in this situation, if ensemble averaging is allowed then distortion is minimized at an optimal nonzero noise level.
Furthermore, Chapter 2 contains a discussion and critique of the use of SNR measures to quantify SR, the debate about SNR gains due to SR, and the relationship between SNRs and information theory.
Suprathreshold stochastic resonance
Chapter 4 provides an up-to-date literature review of previous work on suprathreshold stochastic resonance (SSR). It also gives numerical results, showing SSR occurring for a number of matched and mixed signal and noise distributions not previously considered. A generic change of variable in the equations used to determine the mutual information through the SSR model is introduced. This change of variable results in a probability density function (PDF) that describes the average transfer function of the SSR model.
As described and illustrated in Chapters 4–7, a form of stochastic resonance called suprathreshold stochastic resonance can occur in a model system where more than one identical threshold device receives the same signal, but is subject to independent additive noise. In this chapter, we relax the constraint in this model that each threshold must have the same value, and aim to find the set of threshold values that either maximizes the mutual information, or minimizes the mean square error distortion, for a range of noise intensities. Such a task is a stochastic optimal quantization problem. For sufficiently large noise, we find that the optimal quantization is achieved when all thresholds have the same value. In other words, the suprathreshold stochastic resonance model provides an optimal quantization for small input signal-to-noise ratios.
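As a rough numerical illustration of this stochastic optimal quantization problem (a sketch under assumptions of our own choosing: N = 2 devices, Gaussian signal and noise, a coarse grid search, and conditional-mean decoding), the distortion-minimizing thresholds can be estimated as follows.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
M = 20_000
x = rng.standard_normal(M)                 # unit-variance Gaussian signal

def mse(thetas, sigma):
    """Monte Carlo MSE for threshold values `thetas` and noise level
    sigma, with conditional-mean decoding (the objective is itself noisy)."""
    noise = sigma * rng.standard_normal((len(thetas), M))
    n = np.sum((x + noise) > np.asarray(thetas)[:, None], axis=0)
    y = np.zeros(M)
    for s in np.unique(n):
        y[n == s] = x[n == s].mean()       # reproduction point for state s
    return np.mean((x - y) ** 2)

grid = np.linspace(-1.2, 1.2, 9)
for sigma in (0.1, 1.5):
    t1, t2 = min(product(grid, repeat=2), key=lambda t: mse(t, sigma))
    print(f"sigma={sigma}: best thresholds ~ ({t1:+.2f}, {t2:+.2f})")
```

For small σ the best thresholds separate, approaching the boundaries of a conventional three-level (Lloyd-Max style) quantizer, while for large σ the estimated optimum collapses towards equal thresholds at the signal mean, consistent with the result quoted above. Since the Monte Carlo objective is noisy, individual runs vary.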
Introduction
The previous four chapters consider a form of stochastic resonance, known as suprathreshold stochastic resonance (SSR), which occurs in an array of identical noisy threshold devices. The noise at the input to each threshold device is independent and additive, and this causes a randomization of effective threshold values, so that all thresholds have unique, but random, effective values. Chapter 4 discusses and extends Stocks' result (Stocks 2000c) that the mutual information between the SSR model's input and output signals is maximized for some nonzero value of noise intensity. Chapter 6 considers how to reconstruct an approximation of the input signal by decoding the SSR model's output signal.
Stochastic resonance (SR), being an interdisciplinary and evolving subject, has seen many debates. Indeed, the term SR itself has been difficult to comprehensively define to everyone's satisfaction. In this chapter we look at the problem of defining stochastic resonance, as well as exploring its history. Given that the bulk of this book is focused on suprathreshold stochastic resonance (SSR), we give particular emphasis to forms of stochastic resonance where thresholding of random signals occurs. An important example where thresholding occurs is in the generation of action potentials by spiking neurons. In addition, we outline and comment on some of the confusions and controversies surrounding stochastic resonance and what can be achieved by exploiting the effect. This chapter is intentionally qualitative. Illustrative examples of stochastic resonance in threshold systems are given, but fuller mathematical and numerical details are left for subsequent chapters.
Introducing stochastic resonance
Stochastic resonance, although a term originally used in a very specific context, is now broadly applied to describe any phenomenon where the presence of internal noise or external input noise in a nonlinear system provides a better system response to a certain input signal than in the absence of noise. The key term here is nonlinear. Stochastic resonance cannot occur in a linear system – linear in this sense meaning that the output of the system is a linear transformation of its input. A wide variety of performance measures have been used – we shall discuss some of these later.
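A minimal numerical illustration of this point (our own sketch; the amplitude, threshold, and noise levels are arbitrary choices): a subthreshold sine wave passed through a single comparator produces an output whose correlation with the input peaks at an intermediate noise intensity.

```python
import numpy as np

rng = np.random.default_rng(2)

T, theta = 4096, 1.0
t = np.arange(T)
s = 0.6 * np.sin(2 * np.pi * 8 * t / T)   # peak 0.6 < threshold 1.0

for sigma in (0.2, 0.6, 5.0):
    # Average the thresholded output over independent noise realizations.
    out = np.mean([(s + sigma * rng.standard_normal(T)) > theta
                   for _ in range(200)], axis=0)
    c = np.corrcoef(s, out)[0, 1]
    print(f"noise sigma = {sigma:3.1f} -> input-output correlation = {c:.3f}")
```

Without noise the output never crosses the threshold and carries no information about the signal; with too much noise the signal is swamped; in between lies the resonance.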
By definition, signal or data quantization schemes are noisy in the sense that some information about a measurement or variable is lost in the process of quantization. Other systems are subject to stochastic forms of noise that interfere with the accurate recovery of a signal, or cause inaccuracies in measurements. However, stochastic noise and quantization can both be incredibly useful in natural processes and engineered systems. As we saw in Chapter 2, one way in which noisy behaviour can be useful is through a phenomenon known as stochastic resonance (SR). In order to relate SR and signal quantization, this chapter provides a brief history of standard quantization theory. Such results and research have come mainly from the electronic engineering community, where quantization needs to be understood for the very important process of analogue-to-digital conversion – a fundamental requirement for the plethora of digital systems in the modern world.
Information and quantization theory
Analogue-to-digital conversion (ADC) is a fundamental stage in the electronic storage and transmission of information. This process involves obtaining samples of a signal, and quantizing each sample to one of a finite number of levels.
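For readers unfamiliar with the operation, the following is a minimal midrise uniform quantizer in Python (an illustrative sketch only; the bit depth and input range are arbitrary choices).

```python
import numpy as np

def uniform_quantize(x, n_bits, lo=-1.0, hi=1.0):
    """Midrise uniform quantizer: map each sample to one of 2**n_bits
    cells over [lo, hi] and return the cell-midpoint reproduction values."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

x = np.linspace(-1, 1, 9)
print(uniform_quantize(x, n_bits=3))
```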
According to the Australian Macquarie Dictionary, the definition of the word ‘quantize’ is
1. Physics: a. to restrict (a variable) to a discrete value rather than a set of continuous values. b. to assign (a discrete value), as a quantum, to the energy content or level of a system. 2. Electronics: to convert a continuous signal waveform into a waveform which can have only a finite number (usually two) of values.
This chapter discusses the behaviour of the mutual information and channel capacity in the suprathreshold stochastic resonance model as the number of threshold elements becomes large or approaches infinity. The results in Chapter 4 indicate that the mutual information and channel capacity might converge to simple expressions in N for large N. The current chapter finds that accurate approximations do indeed exist in the large N limit. Using a relationship between mutual information and Fisher information, it is shown that capacity is achieved either (i) when the signal distribution is the Jeffreys prior, a distribution which depends entirely on the noise distribution, or (ii) when the noise distribution depends on the signal distribution via a cosine relationship. These results provide theoretical verification and justification for previous work in both computational neuroscience and electronics.
Introduction
Section 4.4 of Chapter 4 presents results for the mutual information and channel capacity through the suprathreshold stochastic resonance (SSR) model shown in Fig. 4.1. Recall that σ is the ratio of the noise standard deviation to the signal standard deviation. For the case of matched signal and noise distributions and a large number of threshold devices, N, the optimal value of σ – that is, the value of σ that maximizes the mutual information and achieves channel capacity – appears to asymptotically approach a constant value with increasing N. This indicates that analytical expressions might exist in the case of large N for the optimal noise intensity and channel capacity.
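For orientation, the relationship between mutual information and Fisher information mentioned above is, in its standard large-N asymptotic form (due to Clarke and Barron (1990) and Brunel and Nadal (1998); the notation here is ours, with f_X the signal PDF and J(x) the Fisher information that the array output carries about the input x),

$$ I(X;Y) \simeq H(X) + \frac{1}{2}\int f_X(x)\,\ln\!\frac{J(x)}{2\pi e}\,\mathrm{d}x, $$

and this expression is maximized over signal distributions by the Jeffreys prior

$$ f_X^{\mathrm{opt}}(x) = \frac{\sqrt{J(x)}}{\int\!\sqrt{J(s)}\,\mathrm{d}s}. $$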
Chapters 1 to 6 laid down the fundamental physical and mathematical concepts pertaining to thermodynamics. While we always kept the discussion close to our atmosphere, it was not until Chapter 7 that, in a more applied mood, we presented how these concepts are applied to yield quantities useful to atmospheric processes. However, even if we understand all the mathematics, we still need an efficient way to present and visualize thermodynamic processes in the atmosphere. Thermodynamic diagrams can do that. Up to this point we have made an effort to visualize thermodynamic processes using a (p, V) or a (p, T) diagram. However, such diagrams, while simple, are not always convenient to use in practice.
Since the purpose of a diagram is to display processes and estimate thermodynamic quantities efficiently and clearly, the following are very desirable in a thermodynamic diagram: (1) for every cyclic process the area should be proportional to the work done or energy (area-equivalent transformations), (2) lines should be straight (easy to use), and (3) the angle between adiabats and isotherms should be as large as possible (easy to distinguish). The (p, V) diagram satisfies the first condition (p dα = dw), but the angle between isotherms and adiabats is not very large (Figure 4.5(a)). Because of this, while it is used for illustration purposes, this diagram is not used in practice.
Conditions for area-equivalent transformations
When we are constructing a new diagram, in effect, we go from the x = α, y = −p domain to a new domain characterized by two new coordinates, say u and w.
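In outline (our summary of the standard argument): the area traced by a cyclic process in the original plane is $\oint y\,\mathrm{d}x$, which, with x = α and y = −p, is (up to sign) the specific work $\oint p\,\mathrm{d}\alpha$. Areas in the new (u, w) plane equal areas in the (x, y) plane exactly when the Jacobian of the transformation is unity,

$$ \frac{\partial(u, w)}{\partial(x, y)} = \frac{\partial u}{\partial x}\frac{\partial w}{\partial y} - \frac{\partial u}{\partial y}\frac{\partial w}{\partial x} = 1, $$

which guarantees $\oint w\,\mathrm{d}u = \oint y\,\mathrm{d}x$ for every closed path, i.e. an area-equivalent transformation.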
The fundamental equations derived in the previous chapter can only be applied to closed systems (i.e. to systems that do not exchange mass) that are homogeneous (i.e. they involve just one phase). In such cases we do not need to specify how thermodynamic functions depend on the composition of the system. Only two independent variables (T and p, or p and V, or T and V) need to be known. Since the total mass (m) remains constant, if we know the values of the extensive variables per unit mass we can extend the equations to any mass by multiplying by m or by n (the number of moles).
A heterogeneous system involves more than a single phase. In this case we are concerned with the conditions of internal equilibrium between the phases. Even if the heterogeneous system is assumed to be (as a whole) a closed system, the phases constitute homogeneous but open “subsystems” which can exchange mass between them. In this case the fundamental equations must be modified to include extra terms to account for the mass exchanges. These extra terms involve a function μ called the chemical potential, μ = μ(T, p). We will not go into the details of defining μ; we will only accept that in the case of open systems something else must be included to account for the heterogeneity of the system. In this book we are concerned with a heterogeneous system that involves dry air (N₂, O₂, CO₂, Ar) and water, with the water existing in vapor and possibly one of the condensed phases (liquid water or ice).
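For concreteness (a standard form from general thermodynamics, not a derivation given here): for a closed homogeneous phase the Gibbs function obeys $\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}p$, and for an open phase containing $n_i$ moles of constituent i this generalizes to

$$ \mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}p + \sum_i \mu_i\,\mathrm{d}n_i, \qquad \mu_i = \left(\frac{\partial G}{\partial n_i}\right)_{T,\,p,\,n_{j\neq i}}, $$

the $\mu_i\,\mathrm{d}n_i$ terms being precisely the “something else” that accounts for mass exchange between the phases.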
The previous chapters have presented the basics in atmospheric thermodynamics. As we know, in atmospheric sciences the ultimate goal is to predict as accurately as possible the changes in weather and climate. Thermodynamic processes are crucial in predicting changes in weather patterns. For example, during cloud and precipitation formation vast amounts of heat are exchanged with the environment that affect the atmosphere at many different spatial scales. In this final chapter we will present the basic concepts behind predicting weather changes. This chapter is not meant to treat the issue thoroughly, but only to offer a glimpse of what comes next.
Basic predictive equations in the atmosphere
In the Newtonian framework the state of the system is described exactly by the position and velocity of all its constituents. In the thermodynamical framework the state is defined by the temperature, pressure, and density of all its constituents. In a dynamical system such as the climate system both frameworks apply. Accordingly, a starting point in describing such a system will be to seek a set of equations that combine both the mechanical motion and thermodynamical evolution of the system.
The fundamental equations that govern the motion and evolution of the atmosphere (and for that matter of the oceans and sea ice) are derived from the three basic conservation laws: the conservation of momentum, the conservation of mass, and the conservation of energy. For the atmosphere the equation of state relates temperature, density, and pressure.
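In standard notation (a summary in symbols of our own choosing, not necessarily those used later: $\vec{v}$ the velocity, ρ the density, p the pressure, T the temperature, $\vec{\Omega}$ the Earth's angular velocity, $\vec{F}$ friction, $\dot{q}$ the diabatic heating rate per unit mass, R the gas constant, and $c_p$ the specific heat at constant pressure), these statements read

$$ \frac{D\vec{v}}{Dt} = -\frac{1}{\rho}\nabla p - 2\vec{\Omega}\times\vec{v} + \vec{g} + \vec{F} \quad \text{(momentum)}, $$
$$ \frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\vec{v}) = 0 \quad \text{(mass)}, $$
$$ c_p\frac{DT}{Dt} - \frac{1}{\rho}\frac{Dp}{Dt} = \dot{q} \quad \text{(energy)}, $$
$$ p = \rho R T \quad \text{(state)}. $$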
This book is intended for a semester undergraduate course in atmospheric thermodynamics. Writing it has been in my mind for a while. The main reason for wanting to write a book like this was that, simply, no such text in atmospheric thermodynamics exists. Do not get me wrong here. Excellent books treating the subject do exist and I have been positively influenced and guided by them in writing this one. However, in the past, atmospheric thermodynamics was either treated at graduate level or at undergraduate level in a partial way (using part of a general book in atmospheric physics) or too fully (thus making it difficult to fit it into a semester course). Starting from this point, my idea was to write a self-contained, short, but rigorous book that provides the basics in atmospheric thermodynamics and prepares undergraduates for the next level. Since atmospheric thermodynamics is established material, the originality of this book lies in its concise style and, I hope, in the effectiveness with which the material is presented. The first two chapters provide basic definitions and some useful mathematical and physical notes that we employ throughout the book. The next three chapters deal with more or less classical thermodynamical issues such as basic gas laws and the first and second laws of thermodynamics. In Chapter 6 we introduce the thermodynamics of water, and in Chapter 7 we discuss in detail the properties of moist air and its role in atmospheric processes. In Chapter 8 we discuss atmospheric stability, and in Chapter 9 we introduce thermodynamic diagrams as tools to visualize thermodynamic processes in the atmosphere and to forecast storm development.