It is my pleasure to welcome this translation of the original French version of my book La fonction d'onde de Bethe into English.
The theory of exactly solvable models, perhaps more than any other subfield of many-body physics, has the distinct advantage of providing solid, reliable and long-lasting knowledge. This latter characteristic perhaps explains why my original text, which is by now over three decades old, is still used by members of the scientific community, despite the many mistakes and omissions on my part. This is why the present work of J.-S. Caux is more than a translation: through his revision he has drawn ‘new from old’ thanks to his style and rigour.
Despite all the developments which the field has seen in the intervening period, and which are of course not treated or mentioned here, much of what is presented has probably not become too dated in the years since the original version appeared. I am convinced that this translation will bring to a much larger readership an accurate image of the status of the knowledge on these fascinating models at the moment of publication of the original. The initiative and merit belong to the translator, to whom I express my gratitude.
Complexity science is the study of systems with many interdependent components. One of the main concepts is “emergence”: the whole may be greater than the sum of the parts. The objective of this chapter is to put emergence on a firm mathematical foundation in the context of dynamics of large networks. Both stochastic and deterministic dynamics are treated. To minimise technicalities, attention is restricted to dynamics in discrete time, in particular to probabilistic cellular automata and coupled map lattices. The key notion is space-time phases: probability distributions for state as a function of space and time that can arise in systems that have been running for a long time. What emerges from a complex dynamic system is one or more space-time phases. The amount of emergence in a space-time phase is its distance from the set of product distributions over space, using an appropriate metric. A system exhibits strong emergence if it has more than one space-time phase. Strong emergence is the really interesting case.
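The idea of measuring emergence as a distance from the set of product distributions can be sketched numerically. The following is a minimal illustration (not taken from the chapter): it quantifies the dependence in a two-site joint distribution as its total variation distance from the product of its marginals. The choice of total variation as the metric is an assumption made here for concreteness.

```python
import numpy as np

def emergence_tv(joint):
    """Total variation distance between a joint distribution over two
    sites (a 2D array of probabilities) and the product of its marginals.
    Zero means the sites are independent; larger values mean more
    dependence, in the spirit of the 'amount of emergence'."""
    px = joint.sum(axis=1)        # marginal distribution of site 1
    py = joint.sum(axis=0)        # marginal distribution of site 2
    product = np.outer(px, py)    # the corresponding product distribution
    return 0.5 * np.abs(joint - product).sum()

# Perfectly correlated pair of binary sites: far from any product measure
correlated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
# Independent pair: distance zero
independent = np.outer([0.5, 0.5], [0.5, 0.5])

print(emergence_tv(correlated))    # 0.5
print(emergence_tv(independent))   # 0.0
```

A space-time phase is a distribution over whole space-time configurations rather than a pair of sites, but the same construction (distance from the nearest product measure) applies in principle.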
This chapter is based on MSc or PhD courses given at Warwick in 2006/7, Paris in April 2007, Warwick in Spring 2009 and Autumn 2009, and Brussels in Autumn 2010. It was written up during study leave in 2010/11 at the Université Libre de Bruxelles, to whom I am grateful for hospitality, and finalised in 2012.
The chapter provides an introduction to the theory of space-time phases, via some key examples of complex dynamic systems.
I am most grateful to Dayal Strub for transcribing the notes into LaTeX and for preparing the figures.
Economic behavior and market evolution present notoriously difficult complex systems, where physical interacting particles become purpose-pursuing interacting agents, thus providing a kind of bridge between physics and social sciences.
We systematically develop the mathematical content of the basic theory of financial economics that can be presented rigorously using elementary probability and calculus, that is, the notions of discrete and absolutely continuous random variables, their expectation, the notions of independence and of the law of large numbers, basic integration and differentiation, ordinary differential equations and (only occasionally) the method of Lagrange multipliers. We do not assume any knowledge of finance, apart from an elementary understanding of the idea of compound interest, which can be of two types: (i) simple compounding with rate r over a fixed period of time means your capital in this period is multiplied by (1 + r); (ii) continuous compounding with rate r means your capital over a period of time of length t is multiplied by e^{rt}.
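The two compounding conventions above can be written out as a short sketch (the function names and example numbers are illustrative, not from the text):

```python
import math

def simple_compound(capital, r, periods):
    """Simple compounding: the capital is multiplied by (1 + r)
    once for each period."""
    return capital * (1 + r) ** periods

def continuous_compound(capital, r, t):
    """Continuous compounding: the capital is multiplied by e^{rt}
    over a time interval of length t."""
    return capital * math.exp(r * t)

print(simple_compound(100.0, 0.05, 1))       # 105.0
print(continuous_compound(100.0, 0.05, 1.0)) # about 105.13
```

Over a single period, continuous compounding always yields slightly more than simple compounding at the same rate, since e^r > 1 + r for r > 0.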
This chapter is based on several lecture courses for statistics and mathematics students at the University of Warwick and on invited mini-courses presented by the author at various other places. Sections 6.2 and 6.3 are developed from the author's booklet [9]. The chapter is written in a rather concise (but comprehensive) style, in an attempt to pin down as clearly as possible the mathematical relations that govern the laws of financial economics. Numerous heavy volumes are devoted to the detailed discussion of the economic content of these mathematical relations; see e.g. [5], [6], [8], [15], [17].
Partial differential equations in complexity science
Partial differential equations (PDEs), that is to say equations relating partial derivatives of functions of more than one variable, are part of the bedrock of most quantitative disciplines, and complexity science is no exception. They invariably arise when continuous fields are introduced into models. The classic example is the distribution of heat in a thermal conductor, which typically varies continuously with time and with position. The continuous field in this case is T(x,t), the temperature at position x and time t, which satisfies a linear PDE called the diffusion equation.
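The diffusion equation dT/dt = D d²T/dx² can be integrated numerically with a simple explicit finite-difference scheme. The sketch below is illustrative only; the grid sizes, the diffusion constant D and the initial "hot spot" are all assumptions made for the example.

```python
import numpy as np

D, L, nx, dt, steps = 1.0, 1.0, 51, 1e-5, 2000
dx = L / (nx - 1)
assert D * dt / dx**2 <= 0.5      # stability condition for the explicit scheme

T = np.zeros(nx)
T[nx // 2] = 1.0 / dx             # unit amount of heat concentrated mid-domain

for _ in range(steps):
    # discrete Laplacian (T[i-1] - 2 T[i] + T[i+1]) / dx^2
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    lap[0] = lap[-1] = 0.0        # keep the boundary values fixed at zero
    T = T + dt * D * lap

# The initial spike spreads out: the peak drops while the total heat
# stays (approximately) constant until it reaches the boundaries.
print(T.max(), T.sum() * dx)
```

The assertion encodes the well-known stability bound D dt/dx² ≤ 1/2 for the explicit scheme; violating it makes the numerical solution blow up rather than diffuse.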
In complexity science, PDEs often result from the process of coarse-graining whereby microscopically discrete processes are averaged over small scales to produce an effective continuous description of larger scales. Since much of complexity science focuses on the emergent properties of such coarse-grained descriptions, the analysis of PDEs is a key part of our complexity science toolkit. Some examples of coarse-graining as applied to interacting particle systems were discussed in Chapter 3. Another well-known example is the effective description of traffic flow using a coarse-grained fluid description described by a PDE known as Burgers' equation and variants of it, cf. Chapter 4. We will revisit this application in more detail later. Real fluids also provide a wealth of examples of complex behaviour, all described by the well-known Navier-Stokes equations or variants of them which are famous for being among the most mathematically intractable PDEs of classical physics.
Dynamical systems are represented by mathematical models that describe different phenomena whose state (or instantaneous description) changes over time. Examples are mechanics in physics, population dynamics in biology and chemical kinetics in chemistry. One basic goal of the mathematical theory of dynamical systems is to determine or characterise the long-term behaviour of the system using methods of bifurcation theory for analysing differential equations and iterated mappings. Interestingly, some simple deterministic nonlinear dynamical systems and even piecewise linear systems can exhibit completely unpredictable behaviour, which might seem to be random. This behaviour of systems is known as deterministic chaos.
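The unpredictability of simple deterministic systems is easy to demonstrate with an iterated mapping. The sketch below uses the logistic map x → r x (1 − x), a standard example (chosen here for illustration, not specified in the text): at r = 4 the map is chaotic, and two orbits started from nearly identical initial conditions separate rapidly.

```python
def logistic_orbit(x0, r=4.0, n=40):
    """Iterate the logistic map x -> r*x*(1 - x) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)   # perturb the initial state by one part in a billion

# Sensitive dependence on initial conditions: the tiny perturbation is
# amplified until the two orbits become completely decorrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

This sensitive dependence on initial conditions is the hallmark of deterministic chaos: the rule is fully deterministic, yet any finite-precision knowledge of the initial state is useless for long-term prediction.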
This chapter aims to introduce some of the techniques used in the modern theory of dynamical systems and the concepts of chaos and strange attractors, and to illustrate a range of applications to problems in the physical, biological and engineering sciences. The material covered includes differential (continuous-time) and difference (discrete-time) equations, first- and higher-order linear and nonlinear systems, bifurcation analysis, nonlinear oscillations, perturbation methods, chaotic dynamics, fractal dimensions, and local and global bifurcations.
Readers are expected to know calculus and linear algebra and be familiar with the general concept of differential equations.
Many examples exist of systems made of a large number of comparatively simple elementary constituents which exhibit interesting and surprising collective emergent behaviours. They are encountered in a variety of disciplines, ranging from physics to biology and, of course, economics and social sciences. We all experience, for instance, the variety of complex behaviours emerging in social groups. In a similar sense, in biology, the whole spectrum of activities of higher organisms results from the interactions of their cells and, at a different scale, the behaviour of cells from the interactions of their genes and molecular components. These, in turn, like all the incredible variety of natural systems, are formed from the spontaneous assembly, in large numbers, of just a few kinds of elementary particles (e.g., protons, electrons).
To stress the contrast between the comparative simplicity of constituents and the complexity of their spontaneous collective behaviour, these systems are sometimes referred to as “complex systems”. They involve a number of interacting elements, often exposed to the effects of chance, so the hypothesis has emerged that their behaviour might be understood, and predicted, in a statistical sense. Such a perspective has been exploited in statistical physics, as has the later idea of “universality”: the discovery that general mathematical laws might govern the collective behaviour of seemingly different systems, irrespective of the minute details of their components, as we look at them at different scales, like Chinese boxes.
Interacting particle systems (IPS) are probabilistic mathematical models of complex phenomena involving a large number of interrelated components. There are numerous examples within all areas of natural and social sciences, such as traffic flow on motorways or communication networks, opinion dynamics, spread of epidemics or fires, genetic evolution, reaction diffusion systems, crystal surface growth, financial markets, etc. The central question is to understand and predict emergent behaviour on macroscopic scales, as a result of the microscopic dynamics and interactions of individual components. Qualitative changes in this behaviour depending on the system parameters are known as collective phenomena or phase transitions and are of particular interest.
In IPS the components are modelled as particles confined to a lattice or some discrete geometry. But applications are not limited to systems endowed with such a geometry, since continuous degrees of freedom can often be discretized without changing the main features. So depending on the specific case, the particles can represent cars on a motorway, molecules in ionic channels, or prices of asset orders in financial markets (see Chapter 6), to name just a few examples. In principle, such systems often evolve according to well-known laws, but in many cases the microscopic details of motion are not fully accessible. Due to the large system size, these unresolved influences on the dynamics can be approximated as effective random noise with a certain postulated distribution. The actual origin of the noise, which may be related to chaotic motion (see Chapter 2) or thermal interactions (see Chapter 4), is usually ignored.
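A minimal example of such a model (an assumed illustration, not one the chapter specifies) is a symmetric exclusion process on a ring: at each step a randomly chosen particle attempts a hop to a random neighbouring site, and the move succeeds only if the target site is empty, so the particles interact purely through exclusion.

```python
import random

random.seed(1)                      # fixed seed so the run is reproducible
L = 20
sites = [1] * 5 + [0] * (L - 5)     # 5 particles on a ring of 20 sites

def step(sites):
    """One update: pick a site; if occupied, try to hop to a random
    neighbour (periodic boundaries); the hop fails if the target is full."""
    i = random.randrange(L)
    j = (i + random.choice([-1, 1])) % L
    if sites[i] == 1 and sites[j] == 0:
        sites[i], sites[j] = 0, 1

for _ in range(10_000):
    step(sites)

print(sum(sites))   # the particle number is conserved: still 5
```

Even in this toy model the two defining ingredients are visible: locally random dynamics standing in for inaccessible microscopic details, and an interaction (exclusion) through which macroscopic behaviour such as density profiles and currents emerges.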
In this chapter we introduce statistical mechanics in a very general form, and explore how the tools of statistical mechanics can be used to describe complex systems.
To illustrate what statistical mechanics is, let us consider a physical system made of a number of interacting particles. When it is just a single particle in a given potential, it is an easy problem: one can write down the solution (even if not everything can be calculated in closed form). Having two particles is equally easy, as this so-called “two-body problem” can be reduced to two modified one-body problems (one for the centre of mass, the other for the relative position). However, a dramatic change occurs when the number of particles is increased to three. The study of the three-body problem started with Newton, Lagrange, Laplace and many others, but the general form of the solution is still unknown. Even relatively recently, in 1993, a new type of periodic solution was found, in which three equal-mass particles interacting gravitationally chase each other in a figure-of-eight shaped orbit. This and other systems where the number of degrees of freedom is low belong to the subject of dynamical systems, and are discussed in detail in Chapter 2 of this volume. When the number of interacting particles increases to very large numbers, like 10^23, which is typical for the number of atoms in a macroscopic object, surprisingly it gets simpler again, as long as we are interested only in aggregate quantities. This is the subject of statistical mechanics.