Everything should be made as simple as possible, but not simpler.
Albert Einstein
Deterministic systems
Ever since the Pythagorean attempts to explain the tangible world by means of numerical quantities related to integers, Western culture has been characterized by the idea that Nature can be described by mathematics. This idea comes from the explicit or hidden assumption that the world obeys some precise rules. It may appear obvious today, but the systematic application of mathematics to the study of natural phenomena dates from the seventeenth century, when Galileo inaugurated modern physics with the publication of his major work Discorsi e Dimostrazioni Matematiche Intorno a Due Nuove Scienze (Discourses and Mathematical Demonstrations Concerning Two New Sciences) in 1638. The fundamental step toward the mathematical formalization of reality was taken by Newton and his mechanics, explained in Philosophiae Naturalis Principia Mathematica (The Mathematical Principles of Natural Philosophy), often referred to as the Principia, published in 1687. This date was very important not only for the philosophy of physics but for all the other sciences; this great work can be considered the high point of the scientific revolution, in which science as we know it today was born. From the publication of the Principia to the twentieth century, for a large community of scientists the main goal of physics was the reduction of natural phenomena to mechanical laws. A natural phenomenon was considered really understood only when it was explained in terms of mechanical movements.
The meaning of the world is the separation of wish and fact.
Kurt Gödel
In the previous chapter we saw that in deterministic dynamical systems there exist well-established ways to define and measure the complexity of a temporal evolution, in terms of either the Lyapunov exponents or the Kolmogorov–Sinai entropy. This approach is rather successful in deterministic low-dimensional systems. On the other hand, in high-dimensional systems, as well as in low-dimensional cases without a unique characteristic time (as in the example discussed in Section 2.3.3), some interesting features cannot be captured by the Lyapunov exponents or the Kolmogorov–Sinai entropy. In this chapter we will see how an analysis in terms of the finite size Lyapunov exponent (FSLE) and the ∊-entropy, defined in Chapter 2, allows the characterization of non-trivial systems in situations far from asymptotic (i.e. finite time and finite observational resolution). In particular, we will discuss the utility of the ∊-entropy and the FSLE for a pragmatic classification of signals, and the use of chaotic systems in the generation of sequences of (pseudo) random numbers. In addition we will discuss systems containing some randomness.
Characterization of the complexity and system modeling
Typically in experimental investigations, time records of only a few observables are available, and the equations of motion are not known. From a conceptual point of view, this case can be treated in the same framework that is used when the evolution laws are known. Indeed, in principle, with the embedding technique one can reconstruct the topological features of the phase space and the dynamics (Takens 1981, Abarbanel et al. 1993, Kantz and Schreiber 1997).
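The delay-embedding reconstruction can be sketched in a few lines. In this minimal illustration the scalar record comes from a logistic map (a stand-in for experimental data), and the embedding dimension and delay are illustrative choices, not prescriptions:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Build delay vectors (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Scalar time record from a chaotic map (logistic map as a stand-in for data)
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

emb = delay_embed(x, dim=3, tau=1)
print(emb.shape)  # (1998, 3): each row is one reconstructed phase-space point
```

In practice the delay and dimension would be chosen from the data (e.g. via mutual information and false nearest neighbours), not fixed a priori as here.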
To develop the skill of correct thinking is in the first place to learn what you have to disregard. In order to go on, you have to know what to leave out: this is the essence of effective thinking.
Kurt Gödel
Almost all the interesting dynamic problems in science and engineering are characterized by the presence of more than one significant scale, i.e. there is a variety of degrees of freedom with very different time scales. Among numerous important examples we can mention protein folding and climate. While the time scale of vibration of covalent bonds is O(10⁻¹⁵ s), the folding time for proteins may be of the order of seconds. Also in the case of climate, the characteristic times of the involved processes vary from days (for the atmosphere) to O(10³ yr) (for the deep ocean and ice shields). In such a situation one says that the system has a multiscale character (E and Engquist 2003).
The necessity of treating the “slow dynamics” in terms of effective equations is both practical (even modern supercomputers are not able to simulate all the relevant scales involved in certain difficult problems) and conceptual: effective equations are able to catch some general features and to reveal dominant ingredients which can remain hidden in the detailed description. The study of multiscale problems has a long history in science (in particular in mathematics): an early important example is the averaging method in mechanics (Arnold 1976).
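The spirit of the averaging method can be shown on a toy fast–slow system (the model and all parameters here are assumptions for illustration, not taken from the text): a slow variable driven by a fast phase, dI/dt = ε sin²θ with dθ/dt = ω, whose averaged equation replaces sin²θ by its mean 1/2:

```python
import numpy as np

# Fast-slow toy model: dI/dt = eps * sin(theta)^2 (slow), dtheta/dt = omega (fast).
# Averaging replaces sin^2 by its mean 1/2, giving the effective law dI/dt ≈ eps/2.
eps, omega = 0.01, 50.0          # illustrative parameters (assumed)
dt, T = 1e-3, 10.0
I, theta = 0.0, 0.0
for _ in range(int(T / dt)):
    I += eps * np.sin(theta) ** 2 * dt   # full (multiscale) dynamics
    theta += omega * dt

print(I, eps / 2 * T)  # full dynamics vs. averaged prediction eps*T/2 = 0.05
```

The averaged equation reproduces the slow evolution without resolving the fast oscillation, which is exactly the practical payoff of effective equations.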
At any time there is only a thin layer separating what is trivial from what is impossibly difficult. It is in that layer that discoveries are made …
Andrei N. Kolmogorov
An important aspect of the theory of dynamical systems is the formalization and quantitative characterization of the sensitivity to initial conditions. The Lyapunov exponents {λ_i} are the indicators used to measure the average rate of exponential error growth in a system.
Starting from Kolmogorov's idea of characterizing dynamical systems by means of entropy-like quantities, following the work of Shannon in information theory, another approach to dynamical systems has been developed in the context of information theory, data compression and algorithmic complexity theory. In particular, the Kolmogorov–Sinai entropy, h_KS, can be defined and interpreted as a measure of the rate of information production of a system. Since the ability to produce information is tightly linked to the exponential diversification of trajectories, it is no surprise that a relation exists between h_KS and {λ_i}: the Pesin relation, according to which h_KS equals the sum of the positive Lyapunov exponents.
One has to note that quantities such as {λ_i} and h_KS are properly defined only in specific asymptotic limits, that is, very long times and arbitrary accuracy. Since in realistic situations one has to deal with finite accuracy and finite time – as Keynes said, in the long run we shall all be dead – it is important to take into account these limitations. Relaxing the requirement of infinite time, one can investigate the relevance of finite time fluctuations of the “effective” Lyapunov exponent.
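These finite time fluctuations can be made visible numerically: averaging log |f′| over windows of finite length T yields an effective exponent whose block-to-block spread disappears only as T → ∞. A sketch with the logistic map (window length and sample size are illustrative assumptions):

```python
import numpy as np

# Effective (finite-time) Lyapunov exponents of the logistic map at r = 4:
# averages of log|f'(x)| over windows of length T fluctuate around ln 2.
T, nblocks = 50, 2000            # illustrative window length and sample count
x = 0.3
for _ in range(1000):            # discard the transient
    x = 4 * x * (1 - x)
lams = np.empty(nblocks)
for b in range(nblocks):
    s = 0.0
    for _ in range(T):
        s += np.log(abs(4 * (1 - 2 * x)))
        x = 4 * x * (1 - x)
    lams[b] = s / T
print(lams.mean(), lams.std())   # mean near ln 2; the spread shrinks only as T grows
```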
Statistical mechanics was founded during the nineteenth century by the seminal work of Maxwell, Boltzmann and Gibbs, with the main aim of explaining the properties of macroscopic systems from the atomistic point of view. Accordingly, from the very beginning, starting from Boltzmann's ergodic hypothesis, a basic question was the connection between the dynamics and the statistical properties. This is a rather difficult task and, in spite of the mathematical progress made by Birkhoff and von Neumann, ergodic theory has had only marginal relevance in the development of statistical mechanics (at least in the physics community). This was partially due to a misinterpretation of a result of Fermi, and to a widely held opinion (based also on the beliefs of influential scientists such as Landau) that the many degrees of freedom play the key role and that ergodicity is practically irrelevant. This point of view found mathematical support in some results of Khinchin, who was able to show that, in systems with a huge number of particles, statistical mechanics works (independently of ergodicity) just because, on the constant-energy surface, the most meaningful physical observables are nearly constant, apart from regions of very small measure.
On the other hand, the discovery of deterministic chaos (from the anticipating work of Poincaré to the contributions, in the second half of the twentieth century, of Chirikov, Hénon, Lorenz and Ruelle, to cite just the most famous), beyond its undoubted relevance for many natural phenomena, showed that the typical statistical features observed in systems with many degrees of freedom can also be generated by deterministic chaos in simple systems.
To know that you know when you do know, and know that you do not know when you do not know: that is knowledge.
Confucius
Statistical mechanics was founded by Maxwell, Boltzmann and Gibbs to account for the properties of macroscopic bodies, systems with a very large number of particles, without very precise requirements on the dynamics (except for the assumption of ergodicity).
Since the discovery of deterministic chaos it is now well established that statistical approaches may also be unavoidable and useful, as discussed in Chapter 1, in systems with few degrees of freedom. However, even after many years there is no general agreement among the experts about the fundamental ingredients for the validity of statistical mechanics.
It is quite impossible in a few pages to describe the wide spectrum of positions, which ranges from the belief of Landau and Khinchin in the central role of the many degrees of freedom and the (almost) complete irrelevance of dynamical properties, in particular ergodicity, to the opinion of those, for example Prigogine and his school, who consider chaos to be the basic ingredient.
For almost all practical purposes one can say that the whole subject of statistical mechanics consists in the evaluation of a few suitable quantities (for example, the partition function, free energy, correlation functions). The ergodic problem is often forgotten and the (so-called) Gibbs approach is accepted because “it works.” Such a point of view cannot be satisfactory, at least if one believes that understanding the foundations of such a complex issue is no less important than calculating useful quantities.
Since 1990, when the first edition appeared, there has been significant progress in the study of nonequilibrium systems. The centerpieces of the first edition were nonequilibrium molecular-dynamics methods and their theoretical analysis, the connections between linear and nonlinear response theory, and the design of the simulation methods. This is now a mature field with only one significant addition, the new method for elongational flows.
Chapter 10 in the first edition was called “Towards a thermodynamics of steady states.” It contained an introduction to deterministic chaotic systems. The second edition has the same title for Chapter 10, but the contents are now completely different. The application of the ideas of modern dynamical-systems theory to nonequilibrium systems has grown enormously, with all of Chapter 8 devoted to it. Even so, this still constitutes the barest of introductions, with whole books (Gaspard, 1998; Dorfman, 1999; Ott, 2002; Sprott, 2003) devoted to the theme. The theoretical advances in this area are among the most significant. The development of methods to study the time evolution using periodic orbits, and the use of periodic orbits to construct SRB measures for nonequilibrium systems, are exciting steps forward.
Based on dynamical properties, Lyapunov exponents in particular, great strides have been made in the study of fluctuations in nonequilibrium systems.
Linear response theory can be used to design computer simulation algorithms for the calculation of transport coefficients. There are two types of transport coefficients: mechanical and thermal, and we will show how thermal transport coefficients can be calculated using mechanical methods.
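As a sketch of such a calculation (with synthetic Langevin velocities standing in for molecular-dynamics data, and all parameters chosen purely for illustration), the Green–Kubo route to a transport coefficient integrates an equilibrium time-correlation function; here the self-diffusion coefficient from the velocity autocorrelation:

```python
import numpy as np

# Green–Kubo: D = ∫_0^∞ <v(0) v(t)> dt.  The "data" here are synthetic velocities
# from an Ornstein–Uhlenbeck (Langevin) process with kT/m = 1 and friction gamma = 1,
# for which the exact answer is D = kT/(m*gamma) = 1.  Parameters are illustrative.
rng = np.random.default_rng(0)
dt, gamma, n = 0.01, 1.0, 200_000
v = np.empty(n)
v[0] = 0.0
kick = np.sqrt(2 * gamma * dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    v[i + 1] = v[i] - gamma * v[i] * dt + kick[i]
v = v[1000:]                      # discard the equilibration transient

lags = int(5.0 / dt)              # integrate the VACF out to t = 5 (≈ 5 decay times)
vacf = np.array([np.mean(v[: len(v) - k] * v[k:]) for k in range(lags)])
D = vacf.sum() * dt               # rectangle-rule Green-Kubo integral
print(D)                          # should come out close to the exact value 1
```

In a real simulation the velocity record would come from the equations of motion rather than a stochastic surrogate, but the estimator is the same.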
In Nature nonequilibrium systems may respond essentially adiabatically or, depending upon circumstances, approximately isothermally – the quasi-isothermal response. No natural system can be precisely adiabatic or isothermal. There will always be some transfer of the dissipative heat produced in nonequilibrium systems towards thermal boundaries. This heat may be radiated, convected, or conducted to the boundary reservoir. Provided this heat transfer is slow on a microscopic timescale, and provided that the temperature gradients implicit in the transfer process lead to negligible temperature differences on a microscopic length scale, we call the system quasi-isothermal. We assume that quasi-isothermal systems can be modelled microscopically in computer simulations as isothermal systems.
In view of the robustness of the susceptibilities and equilibrium time-correlation functions to various thermostatting procedures (see Sections 5.2 and 5.4), we expect that quasi-isothermal systems may be modeled using Gaussian or Nosé–Hoover thermostats or ergostats. Furthermore, since heating effects are quadratic functions of the thermodynamic forces, the linear response of nonequilibrium systems can always be calculated by analyzing the adiabatic, isothermal, or isoenergetic response.
In nonequilibrium statistical mechanics we seek to model transport processes beginning with an understanding of the motion and interactions of individual atoms or molecules. The laws of classical mechanics govern the motion of atoms and molecules, so in this chapter we begin with a brief description of the mechanics of Newton, Lagrange, and Hamilton. It is often useful to be able to treat constrained mechanical systems. We will use a principle due to Gauss to treat many different types of constraint — from simple bond-length constraints, to constraints on kinetic energy. As we shall see, kinetic energy constraints are useful for constructing various constant temperature ensembles. We will then discuss the Liouville equation and its formal solution. This equation is the central vehicle of nonequilibrium statistical mechanics. We will then need to establish the link between the microscopic dynamics of individual atoms and molecules and the macroscopic hydrodynamical description discussed in the last chapter. We will discuss two procedures for making this connection. The Irving and Kirkwood procedure relates hydrodynamic variables to nonequilibrium ensemble averages of microscopic quantities. A more direct procedure, which we will describe, succeeds in deriving instantaneous expressions for the hydrodynamic field variables.
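A minimal sketch of Gauss's principle applied to a kinetic-energy constraint (unit masses and independent harmonic wells as a stand-in for real interactions; every parameter here is an assumption for illustration): the constraint force -αv, with multiplier α = F·v / v·v, removes exactly the power that the forces would otherwise feed into the kinetic energy:

```python
import numpy as np

# Gauss's principle for an isokinetic constraint: dv/dt = F - alpha*v with
# alpha = (F.v)/(v.v), which keeps the total kinetic energy constant
# (v.(F - alpha*v) = 0 identically).
rng = np.random.default_rng(1)
N, dt, nsteps = 10, 1e-4, 20_000
x = rng.standard_normal(N)
v = rng.standard_normal(N)
ke0 = 0.5 * np.dot(v, v)
for _ in range(nsteps):
    F = -x                                   # harmonic forces, unit stiffness
    alpha = np.dot(F, v) / np.dot(v, v)      # Gaussian constraint multiplier
    x = x + v * dt
    v = v + (F - alpha * v) * dt
ke = 0.5 * np.dot(v, v)
print(ke0, ke)  # the kinetic energy is held (numerically) constant
```

Since the kinetic energy fixes the kinetic temperature, this is the mechanism behind the constant-temperature ensembles mentioned above.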
Newtonian mechanics
Classical mechanics (Goldstein, 1980) is based on Newton's three laws of motion.
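As a minimal illustration (a unit-mass harmonic oscillator with the standard velocity-Verlet scheme; the parameters are arbitrary), Newton's second law m ẍ = F(x) can be integrated numerically while monitoring the conserved energy:

```python
import numpy as np

def force(x):
    return -x  # Hooke's law, F = -kx with k = m = 1 (illustrative choice)

# Velocity-Verlet integration of Newton's second law, m x'' = F(x)
dt, nsteps = 0.01, 10_000
x, v = 1.0, 0.0
f = force(x)
for _ in range(nsteps):
    x += v * dt + 0.5 * f * dt * dt
    f_new = force(x)
    v += 0.5 * (f + f_new) * dt
    f = f_new

energy = 0.5 * v**2 + 0.5 * x**2
print(energy)  # stays near the initial value 0.5: the scheme conserves energy well
```

The same scheme, with interatomic forces in place of the toy spring force, underlies most molecular-dynamics codes.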
Nonequilibrium steady states are fascinating systems to study. Although there are many parallels between these states and equilibrium states, a convincing theoretical description of steady states, particularly far from equilibrium, has yet to be found. Close to equilibrium, linear response theory and linear irreversible thermodynamics provide a relatively complete treatment (Sections 2.1 to 2.3). However, in systems where local thermodynamic equilibrium has broken down, and thermodynamic properties are not the same local functions of thermodynamic state variables that they are at equilibrium, our understanding is very primitive indeed.
In Section 7.3 we gave a statistical-mechanical description of thermostatted, nonequilibrium steady states far from equilibrium – the transient time-correlation function (TTCF) and Kawasaki formalisms. The transient time-correlation function is the nonlinear analog of the Green–Kubo correlation functions. For linear transport processes the Green–Kubo relations play a role which is analogous to that of the partition function at equilibrium. Like the partition function, Green–Kubo relations are highly nontrivial to evaluate. They do, however, provide an exact starting point from which one can derive exact interrelations between thermodynamic quantities. The Green–Kubo relations also provide a basis for approximate theoretical treatments, as well as being used directly in equilibrium molecular-dynamics simulations.
The TTCF and Kawasaki expressions may be used as nonlinear, nonequilibrium partition functions.
Mechanics provides a complete microscopic description of the state of a system. When the equations of motion are combined with initial conditions and boundary conditions, the subsequent time evolution of a classical system can be predicted. In systems with more than just a few degrees of freedom, such an exercise is impossible. There is simply no practical way of measuring the initial microscopic state of, for example, a glass of water at some instant in time. In any case, even if this were possible, we could not then solve the equations of motion for a coupled system of 10²³ molecules.
In spite of our inability to fully describe the microstate of a glass of water, we are all aware of useful macroscopic descriptions for such systems. Thermodynamics provides a theoretical framework for correlating the equilibrium properties of such systems. If the system is not at equilibrium, fluid mechanics is capable of predicting the macroscopic nonequilibrium behaviour of the system. In order for these macroscopic approaches to be useful, their laws must be supplemented, not only with a specification of the appropriate boundary conditions, but with the values of thermophysical constants such as equation-of-state data and transport coefficients. These values cannot be predicted by macroscopic theory.