Thermodynamics of the Atmosphere is the second volume in the series A Course in Theoretical Meteorology. In the first volume, entitled Dynamics of the Atmosphere, we covered many of the essential topics of atmospheric motion but did not provide any theory from thermodynamics. Whenever information from thermodynamics was required for the development of various topics of dynamic meteorology, we carefully stated the basic facts without giving any theoretical background. The reader of this book will clearly recognize that Dynamics of the Atmosphere and Thermodynamics of the Atmosphere are so closely connected that one textbook without the other would give a very incomplete picture of atmospheric weather systems.
We have tried to make this book as self-contained as possible. Whenever some basic facts from atmospheric dynamics are required, we simply state them, omitting any theoretical considerations. Occasionally, however, it might be desirable to provide some additional background. In such cases we refer to our book Dynamics of the Atmosphere, abbreviated as DA. Of course, other textbooks might be just as satisfactory, but it would be more difficult for the reader to extract the necessary information in the desired form.
Thermodynamics of the Atmosphere is written for advanced undergraduate and recently graduated students of meteorology and related disciplines. The book contains a sufficient amount of both theory and applications to provide a solid foundation for more advanced studies.
It is well known that all natural physical processes are irreversible. Three brief examples will demonstrate this.
(i) It has never been observed that heat flowing from a warmer to a colder system will suddenly change its direction and flow from the colder to the warmer system. Nevertheless, the first law of thermodynamics does not prohibit this reversal of direction.
(ii) Consider a system consisting of two chambers. One of these is filled with a gas; the second is completely evacuated. If the separating wall is pierced, a mass flow will take place until the pressure in both chambers is the same. It has never been observed that the original situation was restored by a return flow.
(iii) A stone is dropped into a container of water, resulting in an increase of the internal energy of the water and, therefore, of its temperature. It never happens that the water spontaneously cools off, using the change in internal energy to expel the stone.
In the case of irreversible processes, the original state can be restored only by means of interactions with other systems, which then suffer a lasting change. For example, to restore the original state in the second example, energy in the form of work is required to evacuate the second chamber.
Irreversible processes taking place in isolated systems run in one direction only and thus provide the possibility of discerning between the past, the present and the future.
Quite generally, a scientific measurement of any kind is in principle more useful the more it is reproducible. We need to know that the numbers we measure correspond to properties of the studied object, up to some measurement error. In the case of time series measurements, reproducibility is closely connected to two different notions of stationarity.
The weakest but most evident form of stationarity requires that all parameters that are relevant for a system's dynamics have to be fixed and constant during the measurement period (and these parameters should be the same when the experiment is reproduced). This is a requirement to be fulfilled not only by the experimental set-up but also by the process taking place in this fixed environment. For the moment this might be puzzling since one usually expects that constant external parameters induce a stationary process, but in fact we will confront you in several places in this book with situations where this is not true. If the process under observation is a probabilistic one, it will be characterised by probability distributions for the variables involved. For a stationary process, these probabilities may not depend on time. The same holds if the process is specified by a set of transition probabilities between different states. If there are deterministic rules governing the dynamics, these rules must not change during the time covered by a time series.
In some cases, we can handle a simple change of a parameter once this change is noticed. If the calibration of the measurement apparatus drifts, for example, we can try to rescale the data continuously in order to keep the mean and variance constant.
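A running rescaling of this kind can be sketched in a few lines. The following is an illustrative sketch of our own; the window length and the zero-mean, unit-variance target are arbitrary choices, not prescriptions from the text:

```python
import statistics

def rescale_windows(data, window=50):
    """Rescale a slowly drifting series window by window to zero mean
    and unit variance -- a crude compensation for calibration drift.
    The window length here is a hypothetical choice for illustration."""
    out = []
    for start in range(0, len(data), window):
        chunk = data[start:start + window]
        if len(chunk) < 2:
            out.extend(chunk)  # too short to estimate the moments
            continue
        mu = statistics.fmean(chunk)
        sigma = statistics.pstdev(chunk)
        if sigma == 0:
            out.extend(0.0 for _ in chunk)
        else:
            out.extend((x - mu) / sigma for x in chunk)
    return out

# A series whose mean drifts linearly, as a drifting calibration might:
drifting = [0.01 * n + (n % 5 - 2) for n in range(200)]
flat = rescale_windows(drifting, window=50)
```

Note that such a blind rescaling keeps the first two moments constant but cannot restore stationarity of the underlying dynamics itself.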
You are probably reading this book because you have an interesting source of data and you suspect it is not a linear one. Either you positively know it is nonlinear because you have some idea of what is going on in the piece of world that you are observing or you are led to suspect that it is because you have tried linear data analysis and you are unsatisfied with its results.
Linear methods interpret all regular structure in a data set, such as a dominant frequency, through linear correlations (to be defined in Chapter 2 below). This means, in brief, that the intrinsic dynamics of the system are governed by the linear paradigm that small causes lead to small effects. Since linear equations can only lead to exponentially decaying (or growing) or (damped) periodically oscillating solutions, all irregular behaviour of the system has to be attributed to some random external input to the system. Now, chaos theory has taught us that random input is not the only possible source of irregularity in a system's output: nonlinear, chaotic systems can produce very irregular data with purely deterministic equations of motion in an autonomous way, i.e., without time-dependent inputs. Of course, a system which has both nonlinearity and random input will most likely produce irregular data as well.
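A minimal numerical illustration of deterministic irregularity, using the logistic map as a stand-in example of our own choosing (the map does not appear in the text above): a one-line deterministic rule with no random input whatsoever produces an irregular sequence.

```python
def logistic_orbit(x0, n, r=4.0):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n).
    For r = 4 the map is chaotic: the orbit is irregular although
    the rule is purely deterministic and autonomous."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

orbit = logistic_orbit(0.4, 1000)
```

Rerunning with the same initial value reproduces the orbit exactly; the irregularity is intrinsic, not injected from outside.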
Although we have not yet introduced the tools we need to make quantitative statements, let us look at a few examples of real data sets. They represent very different problems of data analysis where one could profit from reading this book since a treatment with linear methods alone would be inappropriate.
The paradigm of deterministic chaos has influenced thinking in many fields of science. As mathematical objects, chaotic systems show rich and surprising structures. Most appealing for researchers in the applied sciences is the fact that deterministic chaos provides a striking explanation for irregular behaviour and anomalies in systems which do not seem to be inherently stochastic.
The most direct link between chaos theory and the real world is the analysis of time series from real systems in terms of nonlinear dynamics. On the one hand, experimental technique and data analysis have seen such dramatic progress that, by now, most fundamental properties of nonlinear dynamical systems have been observed in the laboratory. On the other hand, great efforts are being made to exploit ideas from chaos theory in cases where the system is not necessarily deterministic but the data displays more structure than can be captured by traditional methods. Problems of this kind are typical in biology and physiology but also in geophysics, economics, and many other sciences.
In all these fields, even simple models, be they microscopic or phenomenological, can create extremely complicated dynamics. How can one verify that one's model is a good counterpart to the equally complicated signal that one receives from nature? Very often, good models are lacking and one has to study the system just from the observations made in a single time series, which is the case for most non-laboratory systems in particular. The theory of nonlinear dynamical systems provides new tools and quantities for the characterisation of irregular time series data.
The most striking feature of chaos is the unpredictability of the future despite a deterministic time evolution. This has already been made evident in Fig. 1.2: the average error made when forecasting the outcome of a future measurement increases very rapidly with time, and in this system predictability is almost lost after only 20 time steps. Nevertheless we claim that these experimental data are very well described as a low dimensional deterministic system. How can we explain this apparent contradiction?
Example 5.1 (Divergence of NMR laser trajectories). In Fig. 5.1 we show several segments of the NMR laser time series (the same data underlying Fig. 1.2; see Appendix B.2) which are initially very close. Over the course of time they separate and finally become uncorrelated. Thus it is impossible to predict the position of the trajectory more than, say, ten time steps ahead, knowing the position of another trajectory at this time which was very close initially. (This is very much in the spirit of the prediction scheme of Section 4.2.)
The above example illustrates that our everyday experience, “similar causes have similar effects”, is invalid in chaotic systems except for short periods; due to determinism, only a mathematically exact reproduction of some event would yield the same result. Note that this has nothing to do with any unobserved influence on the system from outside (although in experimental data such influences are always present) and can be found in every mathematical model of a chaotic system.
This unpredictability is a consequence of the inherent instability of the solutions, reflected by what is called sensitive dependence on initial conditions.
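Sensitive dependence on initial conditions is easy to demonstrate numerically. The sketch below uses the chaotic logistic map as a convenient stand-in (our own choice, not the NMR laser system discussed above): two orbits started 10^-8 apart become macroscopically different within a few dozen steps.

```python
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def divergence(x0, delta=1e-8, steps=60):
    """Distance |x_n - y_n| between two orbits started delta apart."""
    x, y = x0, x0 + delta
    gaps = []
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        gaps.append(abs(x - y))
    return gaps

gaps = divergence(0.3)
```

On average the gap roughly doubles per step (the Lyapunov exponent of this map is ln 2), so the initial error of 10^-8 reaches order one after about 30 iterations.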
The reason for the predominance of scalar observation lies partly in experimental limitations. Also, the tradition of spectral time series analysis may have biased experimentalists towards analysing single measurement channels at a time. One example of a multivariate measurement is the vibrating string data [Tufillaro et al. (1995)] that we use in this book; see Appendix B.3. In this case, the observables represent variables of a physical model so perfectly that they can be used as state vectors without any complication. In distributed systems, however, the mutual relationship of different simultaneously recorded variables is much less clear. Examples of this type are manifold in physiology, economics or climatology, where multivariate time series occur very frequently. Such systems are generally quite complicated, and a systematic investigation of the interrelation between the observables from a point of view other than that of time series analysis is difficult. The different aspects which we will discuss in this chapter are relevant in exactly such situations.
Measures for interdependence
As pointed out before, a first question in the analysis of simultaneously recorded observables is whether they are independent.
Example 14.1 (Surface wind velocities). Let our bivariate time series be a recording of the x-component and the y-component of the wind speed measured at some point on the earth's surface. In principle, these could represent two independent processes. A more reasonable hypothesis, however, is that the modulus of the wind velocity and the angle of the velocity vector are the independent processes, and hence x and y each carry information about both.
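This hypothesis can be played through on synthetic data. In the sketch below (entirely our own construction, with made-up distributions for speed and direction), modulus and angle are drawn independently; the resulting x- and y-components are almost uncorrelated linearly, yet each is a deterministic function of both underlying processes.

```python
import math
import random

def pearson(a, b):
    """Sample linear correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

random.seed(1)
# Hypothetical wind record: modulus and angle drawn independently.
speed = [abs(random.gauss(5.0, 2.0)) for _ in range(2000)]
angle = [random.uniform(0.0, 2.0 * math.pi) for _ in range(2000)]
vx = [s * math.cos(a) for s, a in zip(speed, angle)]
vy = [s * math.sin(a) for s, a in zip(speed, angle)]

# Each component mixes both processes, yet the modulus is recovered
# exactly from the pair (x, y):
modulus = [math.hypot(x, y) for x, y in zip(vx, vy)]
```

The small linear correlation between the components would not reveal their strong mutual dependence, which only shows up in nonlinear functions such as the modulus.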
Throughout the text we have tried to illustrate all relevant issues with the help of experimental data sets, some of them appearing in several different contexts. In order to avoid repetition and to concentrate on the actual topic, we did not describe the data and the systems they come from in any detail in the examples given in the text. We make up for this omission in this appendix, together with a list of all the places where each data set is referred to.
Lorenz-like chaos in an NH3 laser
This data set was created at the PTB Braunschweig in Germany in an experiment run by U. Hübner, N. B. Abraham, C. O. Weiss and collaborators (1993). Within the time series competition organised in 1992 by N. A. Gershenfeld and A. Weigend at the Santa Fe Institute it served as one of the sample series and is available on the SFI server by anonymous FTP to sfi.santafe.edu.
A paradigmatic mathematical model for low dimensional chaos is the Lorenz system, Lorenz (1963), describing the convective motion of a fluid heated from below in a Rayleigh–Bénard cell. Haken (1975) showed that under certain conditions a laser can be described by exactly the same equations, with only the variables and constants having a different physical meaning. The experiment in Braunschweig was designed to fulfil the conditions of being describable by the Lorenz–Haken equations as closely as possible.
The time series is a record of the output power of the laser, consisting of 10 000 data items. Part of it is shown in Fig. B.1. Similarly to the Lorenz model, the system exhibits regular oscillations with slowly increasing amplitude.
The reconstruction of a vector space which is equivalent to the original state space of a system from a scalar time series is the basis of almost all of the methods in this book. Obviously, such a reconstruction is required for all methods exploiting dynamical (such as determinism) or metric (such as dimensions) state space properties of the data. In the first part of the book we introduced the time delay embedding as the way to find such a space. Because of the outstanding importance of the state space reconstruction we want to devote the first section of this chapter to a deeper mathematical understanding of this aspect. In the following sections we want to discuss modifications known as filtered embeddings, the problem of unevenly sampled data, and the possibility of reconstructing state space equivalents from multichannel data.
Embedding theorems
A scalar measurement is a projection of the unobserved internal variables of a system onto an interval on the real axis. Apart from this reduction in dimensionality the projection process may be nonlinear and may mix different internal variables, giving rise to additional distortion of the output. It is obvious that even with a precise knowledge of the measurement process it may be impossible to reconstruct the state space of the original system from the data. Fortunately, a reconstruction of the original phase space is not really necessary for data analysis and sometimes not even desirable, namely, when the attractor dimension is much smaller than the dimension of this space.
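The time delay embedding referred to above can be written down in a few lines (function name and parameter defaults are our own; the construction itself is the standard one): each reconstructed state vector collects m values of the scalar series spaced τ samples apart.

```python
def delay_embed(series, dim, delay=1):
    """Return delay vectors (s_n, s_{n-tau}, ..., s_{n-(m-1)tau})
    for every index n at which all components exist."""
    span = (dim - 1) * delay
    return [tuple(series[n - k * delay] for k in range(dim))
            for n in range(span, len(series))]

# A short demonstration series:
vectors = delay_embed([0.1 * n for n in range(10)], dim=3, delay=2)
```

For a series of length N this yields N − (m − 1)τ vectors; the embedding theorems of this section give conditions under which these vectors are equivalent to the unobserved internal states.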
The nonlinear time series methods discussed in this book are motivated and based on the theory of dynamical systems; that is, the time evolution is defined in some phase space. Since such nonlinear systems can exhibit deterministic chaos, this is a natural starting point when irregularity is present in a signal. Eventually, one might think of incorporating a stochastic component into the description as well. So far, however, we have to assume that this stochastic component is small and essentially does not change the nonlinear properties. Thus all the successful approaches we are aware of either assume the nonlinearity to be a small perturbation of an essentially linear stochastic process, or they regard the stochastic element as a small contamination of an essentially deterministic, nonlinear process. If a given data set is supposed to stem from a genuinely nonlinear stochastic process, time series analysis tools are still very limited, and their discussion will be postponed to Section 12.1.
Consider for a moment a purely deterministic system. Once its present state is fixed, the states at all future times are determined as well. Thus it will be important to establish a vector space (called a state space or phase space) for the system such that specifying a point in this space specifies the state of the system, and vice versa. Then we can study the dynamics of the system by studying the dynamics of the corresponding phase space points. In theory, dynamical systems are usually defined by a set of first-order ordinary differential equations (see below) acting on a phase space.
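As a concrete sketch of such a phase space description, here is a crude Euler integration of the Lorenz equations (a standard example system; the step size and parameter values are conventional choices of our own for illustration). Specifying the three-component state vector fixes the entire future of the trajectory.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations, a set of three coupled
    first-order ODEs acting on a three-dimensional phase space.
    Euler integration is a crude but sufficient sketch here."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

state = (1.0, 1.0, 1.0)
trajectory = []
for _ in range(5000):
    state = lorenz_step(state)
    trajectory.append(state)
```

A production integrator would use a higher-order scheme, but the point stands regardless: once the state is given, the phase space trajectory follows deterministically.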
In this chapter we will discuss programs in the TISEAN software package that correspond to algorithms described in various sections throughout this book. We will skip over more standard and utility routines which you will find documented in most software packages for statistics and data analysis. Rather, we will give some background information on essential nonlinear methods which can rarely be found otherwise, and we will give hints to the usage of the TISEAN programs and to certain choices of parameters.
The TISEAN package has grown out of our efforts to publicise the use of nonlinear time series methods. Some of the first available programs were based on the code that was printed in the first edition of this book. Many more have been added and some of the initial ones have been superseded by superior implementations. The TISEAN package has been written by Rainer Hegger and the authors of this book and is publicly available via the Internet from http://www.mpipks-dresden.mpg.de/~tisean
Our common aim was to spare the user the effort of coding sophisticated numerical software in order to just try and analyse their data. There is no way, however, to spare you the effort of understanding the methods. Therefore, still none of the programs can be used as a black box routine. Programs that would implement all necessary precautions (such as tests for stationarity, estimates of the minimal required number of points, etc.) would in many cases refuse to attempt, say, a dimension estimate. But, even then, we suspect that such a library of nonlinear time series analysis tools would rather promote the misuse of nonlinear concepts than provide a deeper understanding of complex signals.
All experimental data are to some extent contaminated by noise. That this is an undesirable feature is commonplace. (By definition, noise is the unwanted part of the data.) But how bad is noise really? The answer is as usual: it depends. The nature of the system emitting the signal and the nature of the noise determine whether the noise can be separated from the clean signal, at least to some extent. This done, the amount of noise introduces limits on how well a given analysing task (prediction, etc.) can be carried out.
In order to focus the discussion in this chapter on the influence of the noise, we will assume throughout that the data are otherwise reasonably well behaved. By this we mean that the signal would be predictable to some extent by exploiting an underlying deterministic rule – were it not for the noise. This is the case for data sets which can be embedded in a low dimensional phase space, which are stationary, and which are not too short. Violation of any one of these requirements leads to further complications which will not be addressed in this chapter.
Measurement noise and dynamical noise
When talking about noise in a data set we have to make an important distinction between terms. Measurement noise refers to the corruption of observations by errors which are independent of the dynamics. The dynamics satisfy x_{n+1} = F(x_n), but we measure scalars s_n = s(x_n) + η_n, where s(x) is a smooth function that maps points on the attractor to real numbers, and the η_n are random numbers.
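The distinction can be made concrete with a toy system (the logistic map below is our own stand-in for F, and the noise levels are arbitrary): measurement noise leaves the underlying orbit untouched and corrupts only the read-out, whereas dynamical noise is fed back into the state at every step.

```python
import random

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def measurement_noise_series(x0, n, eta=0.01, seed=0):
    """Clean deterministic orbit; noise added only to the observations,
    i.e. s_n = x_n + eta_n."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        x = logistic(x)
        out.append(x + rng.gauss(0.0, eta))
    return out

def dynamical_noise_series(x0, n, eta=0.01, seed=0):
    """Noise perturbs the state itself at every step; the result is
    clipped to [0, 1] to keep the map well defined."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        x = min(1.0, max(0.0, logistic(x) + rng.gauss(0.0, eta)))
        out.append(x)
    return out

m = measurement_noise_series(0.3, 500)
d = dynamical_noise_series(0.3, 500)
```

The two series soon differ completely: with dynamical noise the perturbations are amplified by the chaotic dynamics, while with measurement noise the clean orbit still exists underneath and could in principle be recovered.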