In this chapter we will discuss the notion of the predictability of a system evolving over time or, strictly speaking, of a signal emitted by such a system. Forecasting future values of some quantity is a classical problem in time series analysis, but the conceptual importance of the prediction problem is not limited to those who want to get rich by knowing tomorrow's exchange rates. Even if, instead, you are interested in describing, understanding or classifying signals, stay with us for a few pages.
In this book we are concerned with the detection and quantification of possibly complicated structures in a signal. We want to be able to convince others that the structures we find are real and not just fluctuations. The most convincing argument for the presence of some pattern is that it can be used to give an improved prediction. Compatibility with the known data is a necessary condition for a theory, but it is not sufficient: in order to become accepted, a theory must successfully predict something which can be verified subsequently. In time series analysis, we can take this requirement of predictive quality quite literally.
Most of the concepts we will introduce later in order to describe time series data can be interpreted to some extent as indirect measures of predictability. Due to their indirect nature, some conclusions will remain controversial, especially if the structures are rather faint. A statistically significant ability to predict the signal better than other techniques do will then be a more convincing affirmation of nonlinear and deterministic structure than several dubious digits of the fractal dimension.
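To make this concrete, here is a minimal sketch (our own construction, not an example from the text) of the simplest nonlinear predictor: a locally constant scheme that forecasts the successor of the current delay vector by averaging the successors of its nearest neighbours. The Hénon map serves as a hypothetical stand-in for measured data, and the parameters m, tau and k are chosen purely for illustration; a careful analysis would also exclude temporally close neighbours.

```python
import numpy as np

def henon_series(n, a=1.4, b=0.3):
    """Scalar observable from the Henon map; a stand-in for a measured signal."""
    x, y = 0.1, 0.0
    s = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        s[i] = x
    return s

def zeroth_order_predict(s, m=2, tau=1, k=5):
    """Predict the successor of the last delay vector in s by averaging the
    successors of its k nearest neighbours (locally constant predictor)."""
    t0 = (m - 1) * tau
    idx = np.arange(t0, len(s) - 1)       # delay vectors with known successors
    vecs = np.column_stack([s[idx - j * tau] for j in range(m)])
    query = np.array([s[len(s) - 1 - j * tau] for j in range(m)])
    dist = np.linalg.norm(vecs - query, axis=1)
    nearest = np.argsort(dist)[:k]
    return s[idx[nearest] + 1].mean()

# Illustrative usage: forecast the final value from the preceding data.
s = henon_series(3000)
prediction = zeroth_order_predict(s[:-1])
print(f"predicted {prediction:+.4f}, actual {s[-1]:+.4f}")
```

For clean, noise-free data such as this, the forecast typically agrees with the actual value to within a few per cent, which is far better than any linear predictor can achieve on a chaotic signal.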
In the preceding two chapters we established algorithms to estimate the Lyapunov exponent and the correlation dimension from a time series. We tried to be very strict about the conditions which must be met in order to justify such estimates. The data quality and quantity had to be sufficient to observe clear scaling regions. The implied requirement that the data must be deterministic to a good approximation is also valid for successful nonlinear predictions (Chapter 4). If this were the whole story, the scope of these methods would be quite limited. In the main, well-controlled laboratory data from experiments which have been designed to show deterministic chaos would qualify. Although these include some very interesting signals, many other data sets for which classical, linear time series methods seem inappropriate do not fall into this class.
Indeed, there is a continuous stream of publications reporting more or less successful attempts to apply nonlinear algorithms, in particular the correlation dimension, to field data. Examples range from population dynamics in biology, stock exchange rates in economics, and time-dependent hormone secretion or ECG and EEG signals in medicine to geophysical records of the Earth's magnetic field or the variable luminosity of astronomical objects. In particular, the interpretation of the results as measures of the “complexity” of the underlying systems has met with increasing criticism. It is now quite generally agreed that, in the absence of clear scaling behaviour, quantities derived from dimension or Lyapunov estimators can be at most relative measures of system properties. But even then it is not clear which properties are really measured.
In the first part of this book we introduced two important concepts to characterise deterministic chaos: the maximal Lyapunov exponent and the correlation dimension. We stressed that one of the main reasons for their relevance is the invariance under smooth transformations of the state space. Irrespective of the details of the measurement process and of the reconstruction of the state space, they will always assume the same values. Of course, this is strictly true only for ideal, noise-free and infinitely long time series, but a good algorithm applied to an approximately noise-free and sufficiently long data set should yield results which are robust against small changes in the parameters of the algorithm.
The maximal Lyapunov exponent and the correlation dimension are only two members of a large family of invariants, singled out mainly because they are the two quantities which can best be computed from experimental data. In this chapter we want to introduce a more complete set of invariants which characterises the stability of trajectories and the geometrical and information theoretical properties of the invariant measure on an attractor. These are the spectrum of Lyapunov exponents and the generalised dimensions and entropies. These quantities possess interesting interrelations, the Kaplan–Yorke formula and Pesin's identity. Since these relations provide cross-checks of the numerical estimates, they are of considerable importance for a consistent time series analysis in terms of nonlinear statistics.
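For orientation we quote the two relations in their standard textbook form, with the Lyapunov exponents ordered so that λ1 ≥ λ2 ≥ …; the notation is the conventional one, not specific to this text:

```latex
% Kaplan-Yorke formula: let k be the largest index such that
% \sum_{i=1}^{k} \lambda_i \ge 0; the Kaplan-Yorke (Lyapunov)
% dimension is then
D_{\mathrm{KY}} = k + \frac{\sum_{i=1}^{k} \lambda_i}{|\lambda_{k+1}|}

% Pesin's identity: the Kolmogorov-Sinai entropy equals the sum
% of the positive Lyapunov exponents
h_{\mathrm{KS}} = \sum_{\lambda_i > 0} \lambda_i
```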
In a field as dynamic as nonlinear science, new ideas, methods and experiments emerge constantly and the focus of interest shifts accordingly. There is a continuous stream of new results, and existing knowledge is seen from a different angle after very few years. Five years after the first edition of “Nonlinear Time Series Analysis” we feel that the field has matured in a way that deserves to be reflected in a second edition.
The modification that is most immediately visible is that the program listings have been replaced by a thorough discussion of the publicly available software TISEAN. Already a few months after the first edition appeared, it became clear that most users would need something more convenient to use than the bare library routines printed in the book. Thus, together with Rainer Hegger we prepared stand-alone routines based on the book but with input/output functionality and advanced features. The first public release was made available in 1998 and subsequent releases are in widespread use now. Today, TISEAN is a mature piece of software that covers much more than the programs we gave in the first edition. Now, readers can immediately apply most methods studied in the book to their own data using TISEAN programs. By replacing the somewhat terse program listings with detailed instructions on the proper use of the TISEAN routines, the link between book and software is strengthened, to the benefit, we hope, of readers and users. Hence we recommend downloading and installing the package, so that the exercises can be readily done with the help of these ready-to-use routines.
When we try to build a model of a system, we usually have the ultimate goal of establishing the equations of motion which describe the underlying system in terms of meaningful quantities. Writing down the behaviour of the relevant components of the system in a mathematical language, we try to combine all we know about their actions and interactions. This approach may allow one to construct a simplified but useful image of what happens in nature. Most of the knowledge we have about the inherent mechanisms has been previously derived from measurements. We call such models phenomenological models. In some cases, as in classical mechanics, one is able to derive a dynamical model from first principles, but even the so-called first principles have to be consistent with the empirical observations.
In order to establish a useful phenomenological model, one needs specialised knowledge about the system under study. Therefore, the right place to explain how to make a model for the time variations of some animal population is a book on population biology. On the other hand, there are techniques for constructing models that are based almost purely on time series data. These techniques are generally applicable and thus we consider their treatment appropriate for this book.
The problem treated in this chapter lies at the heart of time series analysis. What can we infer about the dynamical laws governing a system, given a sequence of observations of one or a few time-variable characteristics of the system? We suppose that the external knowledge about the system is limited to some assumptions that we may make about the general structure of these laws.
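As a toy illustration of this programme (our own construction, not an example from the text), suppose the data happen to come from the logistic map. A quadratic ansatz for the dynamical law, fitted by linear least squares on successive pairs of observations, then recovers the underlying equation of motion from the series alone:

```python
import numpy as np

# Illustrative sketch; all names and parameter values are ours.
r = 3.9
x = np.empty(1000)
x[0] = 0.3
for n in range(999):
    x[n + 1] = r * x[n] * (1.0 - x[n])   # the "unknown" system generating the data

# Model ansatz: x_{n+1} = a + b x_n + c x_n^2, fitted by least squares
# on the pairs (x_n, x_{n+1}).
A = np.column_stack([np.ones(999), x[:-1], x[:-1] ** 2])
coef, *_ = np.linalg.lstsq(A, x[1:], rcond=None)
print("recovered (a, b, c):", np.round(coef, 6))   # expect (0, 3.9, -3.9)
```

In realistic applications the structure of the law is of course not known in advance, and noise prevents such a perfect reconstruction; choosing an appropriate model class is precisely the subject of this chapter.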
Regarding applications, chaos control is surely one of the most exciting outcomes of the theory of dynamical systems. [See Ott & Spano (1995) for a nontechnical account.] There exists an impressive list of experiments where chaos control has been applied successfully. Examples include laser systems, chemical and mechanical systems, a magneto-elastic ribbon, and several others. Additionally, there are claims that the same mechanism also works in the control of biological systems such as the heart or the brain. After the pioneering work of Ott, Grebogi & Yorke (1990), often referred to as the “OGY method”, a number of modifications have been proposed. We want to focus here on the original method and only briefly review some modifications which can simplify experimental realisations. We give only a few remarks here on the time series aspects of chaos control technology. For further practical hints, including experimental details, the reader is asked to consult the rich original literature. (See “Further reading” below.)
In most technical environments chaos is an undesired state of the system which one would like to suppress. Think, for instance, of a laser which performs satisfactorily at some constant output power. To increase the power the pumping rate is raised. Suddenly, due to some unexpected bifurcation, the increased output starts to fluctuate in a chaotic fashion. Even if the average of the chaotic output is larger than the highest stable steady output, such a chaotic output is probably not desired. Chaos control can help to re-establish at least a regularly oscillating output at a higher rate, with judiciously applied minimal perturbations.
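The flavour of the original method can be conveyed by a schematic one-dimensional caricature (ours, for illustration only; real implementations work on a Poincaré surface of section and estimate the linearisation from data). Near the unstable fixed point x* of the logistic map, a small deviation evolves as δx(n+1) ≈ λ δx(n) + w δr(n), where λ is the derivative of the map at x* and w its sensitivity to the parameter, so the tiny perturbation δr = −(λ/w) δx cancels the deviation to linear order:

```python
import numpy as np

# Schematic OGY-style control of x -> r x (1 - x); all values are ours.
r0 = 3.9                       # nominal parameter (chaotic regime)
xstar = 1.0 - 1.0 / r0         # unstable fixed point
lam = 2.0 - r0                 # df/dx at the fixed point
w = xstar / r0                 # df/dr at the fixed point
dr_max = 0.05                  # only tiny perturbations are admissible

x, traj = 0.3, []
for n in range(1000):
    dx = x - xstar
    dr = -lam / w * dx         # cancel the deviation to linear order
    if abs(dr) > dr_max:
        dr = 0.0               # orbit too far away: wait until it comes close
    x = (r0 + dr) * x * (1.0 - x)
    traj.append(x)
print("last values:", np.round(traj[-5:], 6))   # pinned near x* ~ 0.7436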
In the preceding sections we have already discussed the properties of chaotic dynamics, namely instability and self-similarity. But nonlinear systems possess a much richer phenomenology than just plain chaos. In fact, non-chaotic motion on stable limit cycles is very typical in real world systems. Hence we devote the first section to them. Synchronisation, which is also a nonlinear phenomenon and clearly related to ordered motion, will be discussed in Section 14.3. The aspects we describe in the following sections can be roughly divided into two classes. The first consists of problems we may have with the reproducibility or stationarity of the signal even though all system parameters are kept fixed. Examples are transient behaviour and intermittency. The other group comprises what can happen under changes of the parameters. We will discuss the various types of attractor and how transitions between them occur, the bifurcations.
Robustness and limit cycles
When speaking of nonlinear dynamics, the immediate associations are bifurcation scenarios and chaos, i.e., the sensitivity of solutions either to small changes of the system parameters or to small perturbations of the system's state vector. What is often overlooked is that a much more typical feature of nonlinear systems is the existence of stable limit cycles, which are often very robust against all kinds of perturbations.
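The van der Pol oscillator is the textbook example: trajectories started far inside or far outside the cycle settle onto the same periodic orbit. The following numerical sketch (our own, with parameters chosen for illustration; it assumes NumPy and SciPy are available) demonstrates this robustness:

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, state, mu=2.0):
    """Van der Pol oscillator, a standard example of a stable limit cycle."""
    x, y = state
    return [y, mu * (1.0 - x * x) * y - x]

# Start well inside and well outside the cycle; both settle onto the same orbit.
for y0 in ([0.01, 0.0], [4.0, 4.0]):
    sol = solve_ivp(vdp, (0.0, 50.0), y0, t_eval=np.linspace(40, 50, 200))
    amp = np.max(np.abs(sol.y[0]))
    print(f"start {y0}: late-time amplitude ~ {amp:.3f}")   # about 2 in both cases
```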
For a periodic signal, a Fourier transform may provide a very good representation of its features. As is well known, the power spectrum (or, equivalently, the autocorrelation function) uniquely determines the corresponding linear model.
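This equivalence is the content of the Wiener–Khinchin theorem, and it is easy to check numerically. The sketch below (ours, using an AR(1) process as the simplest linear model) verifies that the inverse Fourier transform of the power spectrum reproduces the directly computed autocorrelation function:

```python
import numpy as np

# Illustrative sketch; process and parameters are ours.
rng = np.random.default_rng(0)
a, n = 0.8, 4096
x = np.zeros(n)
eps = rng.standard_normal(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + eps[i]       # AR(1): the simplest linear model
x -= x.mean()

# Wiener-Khinchin: the power spectrum is the Fourier transform of the
# autocorrelation function. Zero-pad to avoid circular wrap-around.
xp = np.concatenate([x, np.zeros(n)])
power = np.abs(np.fft.fft(xp)) ** 2
acf_fft = np.real(np.fft.ifft(power))[:10] / n
acf_dir = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(10)])
print(np.allclose(acf_fft, acf_dir))          # True: same information
print(round(acf_dir[1] / acf_dir[0], 3))      # ~ a = 0.8, as theory predicts
```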
In the preceding chapter we discussed the dynamical side of chaos which manifests itself in the sensitive dependence of the evolution of a system on its initial conditions. This strange behaviour in time of a deterministically chaotic system has its counterpart in the geometry of the set in phase space formed by the (non-transient) trajectories of the system, the attractor.
Attractors of dissipative chaotic systems (the kind of systems we are interested in) generally have a very complicated geometry, which led people to call them strange. However, strange sets can also occur without dissipation in more general settings. As we have pointed out already in Chapter 3, a system described by autonomous differential equations (a flow) cannot be chaotic in less than three dimensions. With the same argument that trajectories are not allowed to intersect in a deterministic system we can conclude that not only the phase space but also the attractor of a chaotic flow must be more than two-dimensional. However, slightly more than two dimensions is sufficient and the motion on a (2 + ε)-dimensional fractal can indeed be chaotic. As we will see, strange attractors with fractional dimensions are typical of chaotic systems. Map-like systems can of course show chaos with attractor dimensions less than two. Noninteger dimensions are assigned to geometrical objects which exhibit an unusual kind of self-similarity and which show structure on all length scales.
Example 6.1 (Self-similarity of the NMR laser attractor). Such self-similarity is demonstrated in Fig. 6.1 for an attractor reconstructed from the NMR laser time series, Appendix B.2.
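A quantity that probes exactly this structure on all length scales is the correlation sum C(ε), the fraction of pairs of points closer than ε; on a self-similar set it scales as C(ε) ∝ ε^D2, where D2 is the correlation dimension. The following brute-force sketch (our own, substituting the Hénon attractor, with D2 ≈ 1.2, for the NMR laser data, and omitting refinements such as a Theiler window) illustrates the idea:

```python
import numpy as np

def henon_orbit(n, a=1.4, b=0.3):
    """Points on the Henon attractor (initial transient discarded)."""
    x, y = 0.1, 0.0
    pts = np.empty((n, 2))
    for i in range(n + 100):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= 100:
            pts[i - 100] = (x, y)
    return pts

pts = henon_orbit(2000)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
d = d[np.triu_indices(len(pts), k=1)]        # distinct pairs only

# Correlation sum C(eps) = fraction of pairs closer than eps; the slope
# of log C versus log eps estimates the correlation dimension D2.
for eps in (0.02, 0.04, 0.08, 0.16):
    print(f"eps={eps:5.2f}  C={np.mean(d < eps):.5f}")
# successive ratios log2(C(2 eps) / C(eps)) should hover near D2 ~ 1.2
```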
In Chapter 3 we concentrated on geometric aspects of chaos. In particular, we discussed the fractal dimension characterization of strange attractors and their natural invariant measures, as well as issues concerning phase space dimensionality and embedding. In this chapter we concentrate on the time evolution dynamics of chaotic orbits. We begin with a discussion of the horseshoe map and symbolic dynamics.
The horseshoe map and symbolic dynamics
The horseshoe map was introduced by Smale (1967) as a motivating example in his development of symbolic dynamics as a basis for understanding a large class of dynamical systems. The horseshoe map Mh is specified geometrically in Figure 4.1. The map takes the square S (Figure 4.1(a)), uniformly stretches it vertically by a factor greater than 2 and uniformly compresses it horizontally by a factor less than ½ (Figure 4.1(b)). Then the long thin strip is bent into a horseshoe shape with all the bending deformations taking place in the cross-hatched regions of Figures 4.1(b) and (c). Then the horseshoe is placed on top of the original square, as shown in Figure 4.1(d). Note that a certain fraction, which we denote 1 − f, of the original area of the square S is mapped to the region outside the square. If initial conditions are spread over the square with a distribution which is uniform in the vertical direction, then the fraction of initial conditions that generate orbits that do not leave S during n applications of the map is just f^n.
We have already encountered situations where chaotic motion was non-attracting. For example, the map Eq. (3.3) had an invariant Cantor set in [0, 1], but all initial conditions except for a set of Lebesgue measure zero eventually leave the interval [0, 1] and then approach x = ±∞. Similarly, the horseshoe map has an invariant set in the square S (cf. Figure 4.1), but again all initial conditions except for a set of Lebesgue measure zero eventually leave the square. The invariant sets for these two cases are examples of nonattracting chaotic sets. While it is clear that chaotic attractors have practically important observable consequences, it may not at this point be clear that nonattracting chaotic sets also have practically important observable consequences. Perhaps the three most prominent consequences of nonattracting chaotic sets are the phenomena of chaotic transients, fractal basin boundaries, and chaotic scattering.
The term chaotic transient refers to the fact that an orbit can spend a long time in the vicinity of a nonattracting chaotic set before it leaves, possibly moving off to some nonchaotic attractor which governs its motion ever after. During the initial phase, when the orbit is in the vicinity of the nonattracting chaotic set, its motion can appear to be very irregular and is, for most purposes, indistinguishable from motion on a chaotic attractor.
Say we sprinkle a large number of initial conditions with a uniform distribution in some phase space region W containing the nonattracting chaotic set.
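Numerically, such a sprinkling experiment is easy to set up. The sketch below (our own construction) uses the logistic map x → rx(1 − x) with r > 4, whose invariant set in [0, 1] is a nonattracting Cantor set, as a simple stand-in for the map the text calls Eq. (3.3); counting how many orbits have not yet escaped after n steps exhibits the exponential decay characteristic of chaotic transients:

```python
import numpy as np

# Sprinkle initial conditions uniformly in [0, 1] for x -> r x (1 - x), r > 4.
# Orbits leaving [0, 1] head off toward infinity; the number of survivors
# decays like exp(-kappa * n), where kappa is the escape rate.
r = 4.5
x = np.random.default_rng(1).uniform(0.0, 1.0, 1_000_000)
for n in range(1, 13):
    x = r * x * (1.0 - x)
    x = x[(x >= 0.0) & (x <= 1.0)]        # discard orbits that have escaped
    print(n, len(x))
# the ratio of successive counts approaches a constant survival fraction f
```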
Chaotic dynamics may be said to have started with the work of the French mathematician Henri Poincaré at about the turn of the century. Poincaré's motivation was partly provided by the problem of the orbits of three celestial bodies experiencing mutual gravitational attraction (e.g., a star and two planets). By considering the behavior of orbits arising from sets of initial points (rather than focusing on individual orbits), Poincaré was able to show that very complicated (now called chaotic) orbits were possible. Subsequent noteworthy early mathematical work on chaotic dynamics includes that of G. Birkhoff in the 1920s, M. L. Cartwright and J. E. Littlewood in the 1940s, S. Smale in the 1960s, and Soviet mathematicians, notably A. N. Kolmogorov and his coworkers. In spite of this work, however, the possibility of chaos in real physical systems was not widely appreciated until relatively recently. The reasons for this were first that the mathematical papers are difficult to read for workers in other fields, and second that the theorems proven were often not strong enough to convince researchers in these other fields that this type of behavior would be important in their systems. The situation has now changed drastically, and much of the credit for this can be ascribed to the extensive numerical solution of dynamical systems on digital computers. Using such solutions, the chaotic character of the time evolutions in situations of practical importance has become dramatically clear.