Stable density stratification can have a strong effect on fluid flows. For example, a stably stratified fluid can support the propagation of internal waves. Also, at large enough horizontal scales, flow in a stably stratified fluid will not have enough kinetic energy to overcome the potential energy needed to overturn; flows at this horizontal scale and larger therefore cannot overturn, greatly constraining the types of motion possible. Both of these effects were observed in laboratory experiments on wakes in stably stratified fluids (see, e.g., Lin and Pao, 1979). In these wake experiments, the flow in the near wake of the source, e.g., a towed sphere or a towed grid, generally consisted of three-dimensional turbulence, little affected by the stable stratification. As the flow decayed, however, the effects of stratification became progressively more important. After a few buoyancy periods, when the effects of stable stratification started to dominate, the flow had changed dramatically and consisted of both internal waves and quasi-horizontal motions. Following Lilly (1983), we will refer to such stratification-dominated motions, consisting of internal waves and quasi-horizontal motions, as “stratified turbulence”. It has become clear that such flows, while strongly constrained by the stable stratification, have many of the features of turbulence, including being stochastic, strongly nonlinear, strongly dispersive, and strongly dissipative.
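The overturning argument above can be made quantitative with a standard scaling estimate (added here for illustration; the velocity scale U, overturning scale ℓ, and buoyancy frequency N are not defined in the excerpt):

```latex
% Buoyancy frequency of the background stratification \bar{\rho}(z):
N^2 = -\frac{g}{\rho_0}\,\frac{d\bar{\rho}}{dz} .
% Overturning a fluid parcel across a scale \ell costs potential energy
% of order N^2 \ell^2 per unit mass, against kinetic energy of order U^2:
\frac{U^2}{N^2\ell^2} = F^2 , \qquad F \equiv \frac{U}{N\ell} .
```

Overturning is thus possible only when the Froude number F is of order one or larger; motions at scales with F ≪ 1 are confined to internal waves and quasi-horizontal flow, as described above.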
A primary interest in stratified turbulence is how energy in such flows, strongly affected by stable stratification, is still effectively cascaded down to smaller scales and into three-dimensional turbulence, where it is ultimately dissipated.
Ancient depictions of fluids, going back to the Minoans, envisaged waves and moving streams, but missed what we would call vortices and turbulence. The first artist to depict the rotational properties of fluids, vortical motion and turbulent flows, was da Vinci (1506 to 1510). He would recognize the term vortical motion, which comes from the Latin vortere or vertere, to turn: vorticity is where a gas or liquid is rapidly turning or spiraling. Mathematically, one represents this effect as twists in the velocity derivative, that is, the curl, or the anti-symmetric component of the velocity gradient tensor. If the velocity field is u, then the vorticity is ω = ∇ × u.
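The statement that vorticity is the curl, i.e. the anti-symmetric part of the velocity gradient, can be written out in standard index notation (added here for illustration):

```latex
\partial_j u_i = S_{ij} + \Omega_{ij}, \qquad
S_{ij} = \tfrac{1}{2}\left(\partial_j u_i + \partial_i u_j\right), \qquad
\Omega_{ij} = \tfrac{1}{2}\left(\partial_j u_i - \partial_i u_j\right)
            = -\tfrac{1}{2}\,\epsilon_{ijk}\,\omega_k ,
```

so that ω_i = ε_ijk ∂_j u_k, recovering ω = ∇ × u, with S the symmetric strain-rate tensor.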
The aspect of turbulence which this chapter will focus upon is the structure, dynamics and evolution of vorticity in idealized turbulence – either the products of homogeneous, isotropic, statistically stationary states in forced, periodic simulations, or flows using idealized initial conditions designed to let us understand those states. The isotropic state is often viewed as a tangle of vorticity (at least when the amplitudes are large), an example of which is given in Fig. 2.1. This visualization shows isosurfaces of the magnitude of the vorticity, and similar techniques have been discussed before (see e.g. Pullin and Saffman, 1998; Ishihara et al., 2009; Tsinober, 2009). The goal of this chapter is to relate these graphics to basic relations between vorticity and strain, and to how the subject has evolved toward using vorticity as a measure of regularity; we then focus on the structure and dynamics of vorticity in turbulence, in experiments and numerical investigations, before considering theoretical explanations. Our discussion will focus upon three-dimensional turbulence.
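As an illustration of the quantity behind such visualizations, the vorticity magnitude on a gridded velocity field can be computed with finite differences. The sketch below is ours, not from the chapter; the uniform grid, the use of central differences, and the solid-body-rotation check are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: compute |omega| = |curl u| on a uniform grid with
# central differences; isosurfaces of this scalar field produce pictures
# like the vortex tangle described in the text.

def vorticity_magnitude(u, v, w, dx):
    """u, v, w: 3-D arrays of velocity components on a uniform grid
    with spacing dx, axes ordered (x, y, z)."""
    # np.gradient returns the derivative along each axis, in axis order.
    du = np.gradient(u, dx, edge_order=2)   # [du/dx, du/dy, du/dz]
    dv = np.gradient(v, dx, edge_order=2)
    dw = np.gradient(w, dx, edge_order=2)
    wx = dw[1] - dv[2]                      # omega_x = dw/dy - dv/dz
    wy = du[2] - dw[0]                      # omega_y = du/dz - dw/dx
    wz = dv[0] - du[1]                      # omega_z = dv/dx - du/dy
    return np.sqrt(wx**2 + wy**2 + wz**2)
```

For a solid-body rotation u = (−Ωy, Ωx, 0) the routine recovers |ω| = 2Ω everywhere, a useful sanity check before applying it to turbulence data.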
The wide ranges of length and velocity scales that occur in turbulent flows make them both interesting and difficult to understand. The length scale range is widest in high Reynolds number turbulent wall flows where the dominant contribution of small scales to the stress and energy very close to the wall gives way to dominance of larger scales with increasing distance away from the wall. Intrinsic length scales are defined statistically by the two-point spatial correlation function, the power spectral density, conditional averages, proper orthogonal decomposition, wavelet analysis and the like. Before two- and three-dimensional turbulence data became available from PIV and DNS, it was necessary to attempt to infer the structure of the eddies that constitute the flow from these statistical quantities or from qualitative flow visualization. Currently, PIV and DNS enable quantitative, direct observation of structure and measurement of the scales associated with instantaneous flow patterns. The purpose of this chapter is to review the behavior of statistically-based scales in wall turbulence, to summarize our current understanding of the geometry and scales of various coherent structures and to discuss the relationship between instantaneous structures of various scales and statistical measures.
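As one concrete example of a statistically defined length scale of the kind listed above, the integral scale can be estimated from the two-point correlation of a periodic, one-dimensional velocity signal. This sketch is illustrative rather than taken from the chapter; the FFT-based correlation and the convention of integrating to the first zero crossing are our assumptions.

```python
import numpy as np

# Illustrative sketch: integral length scale from the two-point
# correlation R(r) of a periodic 1-D velocity signal u(x).

def integral_scale(u, dx):
    """Integral scale of a periodic signal u sampled with spacing dx."""
    up = u - u.mean()                        # velocity fluctuation u'
    n = up.size
    # Periodic autocorrelation via the Wiener-Khinchin theorem.
    spec = np.abs(np.fft.rfft(up)) ** 2
    corr = np.fft.irfft(spec, n) / n
    r = corr / corr[0]                       # normalised correlation R(r)
    # Integrate R(r) up to its first zero crossing (trapezoidal rule).
    zero = np.argmax(r[: n // 2] <= 0.0)
    if zero == 0:
        zero = n // 2                        # no crossing: use half period
    return float((r[:zero].sum() - 0.5 * (r[0] + r[zero - 1])) * dx)
```

For u = sin(x) on one period, R(r) = cos(r) and the integral to the first zero is 1, which provides a quick check of the routine.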
Introduction
Wall-bounded turbulent flows are pertinent in a great number of fields, including geophysics, biology, and most importantly engineering and energy technologies where skin friction, heat and mass transfer, flow-generated noise, boundary layer development and turbulence structure are often critical for system performance and environmental impact.
Fully developed turbulence is a phenomenon involving huge numbers of degrees of dynamical freedom. The motions of a turbulent fluid are sensitive to small differences in flow conditions, so conditions that are seemingly identical may give rise to large differences in the motions. It is difficult to predict them in full detail.
This difficulty is similar, in a sense, to the one we face in treating systems consisting of an Avogadro number of molecules, in which it is impossible to predict the motions of them all. It is known, however, that certain relations among a small number of variables, such as the ideal gas laws relating pressure, volume, and temperature, are insensitive to differences in the motions, shapes, collision processes, etc. of the molecules.
Given this, it is natural to ask whether there is any such relation in turbulence. In this regard, we recall that fluid motion is determined by flow conditions, such as boundary conditions and forcing. It is unlikely that the motion would be insensitive to differences in these conditions, especially at large scales. It is also tempting, however, to assume that, in the statistics at sufficiently small scales in fully developed turbulence at sufficiently high Reynolds number, and away from the flow boundaries, there exist certain kinds of relation which are universal in the sense that they are insensitive to the detail of large-scale flow conditions. In fact, this idea underlies Kolmogorov's theory (Kolmogorov, 1941a, hereafter referred to as K41), and has been at the heart of many modern studies of turbulence. Hereafter, universality in this sense is referred to as universality in the sense of K41.
For good practical reasons, most experimental observations of turbulent flow are made at fixed points x in space at time t, and most numerical calculations are performed on a fixed spatial grid and at fixed times. On the other hand, it is possible to describe the flow in terms of the velocity and concentration (and other quantities of interest) at a point moving with the flow. This is known as a Lagrangian description of the flow (Monin and Yaglom, 1971). The position of this point x+(t; x0, t0) is a function of time and of some initial point x0 and time t0 at which it was identified or “labelled”. Its velocity is the velocity of the fluid where it happens to be at time t, u+(t; x0, t0) = u(x+(t), t). We will use the superscript (+) to denote Lagrangian quantities, and quantities after the semi-colon are independent parameters. We refer to a point moving in this way as a fluid particle.
Flow statistics obtained at fixed points and times are known as Eulerian statistics. On the other hand, statistics obtained at specific times by sampling over trajectories, which at some reference times passed through fixed points, are known as Lagrangian statistics. For example, the mean displacement at time t of those particles that passed through the point x0 at time t0 is just 〈x+ (t; x0, t0) − x0〉. In both cases, the measurement time t can be earlier or later than the reference time.
Vorticity fields that are not overly damped develop extremely complex spatial structures exhibiting a wide range of scales. These structures wax and wane in coherence; some are intense and most of them weak; and they interact nonlinearly. Their evolution is strongly influenced by the presence of boundaries, shear, rotation, stratification and magnetic fields. We label the multitude of phenomena associated with these fields as turbulence, and the challenge of predicting the statistical behaviour of such flows has engaged some of the finest minds in twentieth century science.
Progress has been famously slow. This slowness is in part because of the bewildering variety of turbulent flows, from ideal laboratory creations on a small scale to heterogeneous flows on the dazzling scale of the cosmos. Philip Saffman (Structure and Mechanisms of Turbulence II, Lecture Notes in Physics 76, Springer, 1978, p. 273) commented: “… we should not altogether neglect the possibility that there is no such thing as ‘turbulence’. That is to say, it is not meaningful to talk about the properties of a turbulent flow independently of the physical situation in which it arises. In searching for a theory of turbulence, perhaps we are looking for a chimera … Perhaps there is no ‘real turbulence problem’, but a large number of turbulent flows and our problem is the self-imposed and possibly impossible task of fitting many phenomena into the Procrustean bed of a universal turbulence theory.”
Superfluids can flow without friction and display two-fluid phenomena. These two properties, which have quantum mechanical origins, lie outside common experience with classical fluids. The subject of superfluids has thus generally been relegated to the backwaters of mainstream fluid dynamics. The focus of low-temperature physicists has been the microscopic structure of superfluids, which does not naturally invite the attention of experts on classical fluids. However, perhaps amazingly, there exists a state of superfluid flow that is similar to classical turbulence, qualitatively and quantitatively, in which superfluids are endowed with quasiclassical properties such as effective friction and finite heat conductivity. This state is called superfluid or quantum turbulence (QT) [Feynman (1955); Vinen & Niemela (2002); Skrbek (2004); Skrbek & Sreenivasan (2012)]. Although QT differs from classical turbulence in several important respects, many of its properties can often be understood in terms of the existing phenomenology of its classical counterpart. We can also learn new physics about classical turbulence by studying QT. Our goal in this article is to explore this interrelation. Instead of expanding the scope of the article broadly and compromising on details, we will focus on one important aspect: the physics that is common between the decay of vortex line density in QT and the decay of three-dimensional (3D) turbulence that is nearly homogeneous and isotropic (HIT), which has been a cornerstone of many theoretical and modeling advances in hydrodynamic turbulence. A more comprehensive discussion can be found in a recent review by Skrbek & Sreenivasan (2012).
Self-organized criticality (SOC) is based upon the idea that complex behavior can develop spontaneously in certain multi-body systems whose dynamics vary abruptly. This book is a clear and concise introduction to the field of self-organized criticality, and contains an overview of the main research results. The author begins with an examination of what is meant by SOC, and the systems in which it can occur. He then presents and analyzes computer models to describe a number of systems, and he explains the different mathematical formalisms developed to understand SOC. The final chapter assesses the impact of this field of study, and highlights some key areas of new research. The author assumes no previous knowledge of the field, and the book contains several exercises. It will be ideal as a textbook for graduate students taking physics, engineering, or mathematical biology courses in nonlinear science or complexity.
A vast number of systems, from the brain to ecosystems, power grids and the internet, can be represented as large complex networks. Until recently these systems were considered as haphazard sets of points and connections. The availability of large data sets has allowed researchers to uncover complex properties such as large-scale fluctuations and heterogeneities in many networks, leading to the breakdown of standard theoretical frameworks and models. Recent advances have generated a vigorous research effort in understanding the effect of complex connectivity patterns on dynamical phenomena. This book presents a comprehensive account of these effects. It will interest graduate students and researchers in many disciplines, from physics and statistical mechanics to mathematical biology and information science. Its modular approach allows readers to readily access the sections of most interest to them, and complicated maths is avoided so the text can be easily followed by non-experts in the subject.
Giving a detailed overview of the subject, this book takes in the results and methods that have arisen since the term 'self-organised criticality' was coined twenty years ago. Providing an overview of numerical and analytical methods, from their theoretical foundation to the actual application and implementation, the book is an easy access point to important results and sophisticated methods. Starting with the famous Bak-Tang-Wiesenfeld sandpile, ten key models are carefully defined, together with their results and applications. Comprehensive tables of numerical results are collected in one volume for the first time, making the information readily accessible to readers. Written for graduate students and practising researchers in a range of disciplines, from physics and mathematics to biology, sociology, finance, medicine and engineering, the book gives a practical, hands-on approach throughout. Methods and results are applied in ways that will relate to the reader's own research.
A number of experiments and observations have been undertaken to test for SOC in the ‘real world’. Ultimately, these observations motivate the research based on analytical and numerical tools, although the latter provide the clearest evidence for SOC whereas experimental evidence is comparatively ambiguous. What evidence suffices to call a system self-organised critical? One might be inclined to say scale invariance without tuning, but as discussed in Sec. 9.4, the class of such systems might be too large and comprise phenomena that traditionally are regarded as distinct from criticality, such as, for example, diffusion.
In most cases, systems suspected to be self-organised critical display a form of scaling and a form of avalanching, suggesting a separation of time scales. Because of the early link to 1/f noise (Sec. 1.3.2), some early publications regard this as sufficient evidence for SOC. At the other end of the spectrum are systems that closely resemble those that are studied numerically and whose scaling behaviour is not too far from that observed in numerical studies. Yet it remains debatable whether any numerical model is a faithful representation of any experiment or at least incorporates the relevant interactions.
At first sight, solid experimental evidence for scaling or even universality is sparse among the many publications that suggest links to SOC. This result is even more sobering as evidence for SOC is heavily biased – there are very few publications (e.g. Jaeger, Liu, and Nagel, 1989; Kirchner and Weil, 1998) on failed attempts to identify SOC where it was suspected.
When Bak, Tang, and Wiesenfeld (1987) coined the term Self-Organised Criticality (SOC), it was an explanation for an unexpected observation of scale invariance and, at the same time, a programme of further research. Over the years it developed into a subject area which is concerned mostly with the analysis of computer models that display a form of generic scale invariance. The primacy of the computer model is manifest in the first publication and throughout the history of SOC, which evolved with and revolved around such computer models. That has led to a plethora of computer ‘models’, many of which are not intended to model much except themselves (also Gisiger, 2001), in the hope that they display a certain aspect of SOC in a particularly clear way.
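For readers who have not seen such a model, a minimal sketch of the Bak-Tang-Wiesenfeld sandpile, the archetypal SOC computer model, is given below. The threshold height of four, random driving, redistribution to nearest neighbours, and open boundaries follow the standard definition; the grid size and function names are ours.

```python
import random

# Minimal Bak-Tang-Wiesenfeld sandpile: drop grains one at a time and
# relax every unstable site (height >= 4) by toppling, i.e. shedding one
# grain to each of its four neighbours; grains leave at open boundaries.

def drive_and_relax(grid, n):
    """Add one grain to a random site of the n x n grid, relax the pile,
    and return the avalanche size (number of topplings)."""
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1
    size = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue                       # already relaxed via a duplicate entry
        grid[i][j] -= 4                    # topple: shed four grains
        size += 1
        if grid[i][j] >= 4:
            unstable.append((i, j))        # may need to topple again
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n:  # off-grid grains are lost
                grid[a][b] += 1
                if grid[a][b] >= 4:
                    unstable.append((a, b))
    return size
```

Driven repeatedly from an empty grid, the pile self-organises to a stationary state in which the avalanche sizes returned by the routine are broadly (power-law) distributed, which is the phenomenon the term SOC was coined to describe.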
The question whether SOC exists is empty if SOC is merely the title for a certain class of computer models. In the following, the term SOC will therefore be used in its original meaning (Bak et al., 1987), to be assigned to systems
with spatial degrees of freedom [which] naturally evolve into a self-organized critical point.
Such behaviour is to be juxtaposed to the traditional notion of a phase transition, which is the singular, critical point in a phase diagram, where a system experiences a breakdown of symmetry and long-range spatial and, in non-equilibrium, also temporal correlations, generally summarised as (power law) scaling (Widom, 1965a,b; Stanley, 1971).
In this chapter, some important analytical techniques and results are discussed. The first two sections are concerned with mean-field theory, which is routinely applied in SOC, and renormalisation which has had a number of celebrated successes in SOC. As discussed in Sec. 8.3, Dhar (1990a) famously translated the set of rules governing an SOC model into operators, which provides a completely different, namely algebraic perspective. Directed models, discussed in Sec. 8.4, offer a rich basis of exactly solvable models for the analytical methods discussed in this chapter. In the final section, Sec. 8.5, SOC is translated into the language of the theory of interfaces.
It is interesting to review the variety of theoretical languages that SOC models have been cast in. Mean-field theories express SOC models (almost) at the level of updating rules and thus more or less explicitly in terms of a master equation. The same applies for some of the renormalisation group procedures (Vespignani, Zapperi, and Loreto, 1997), although Díaz-Guilera (1992) suggested very early an equation of motion of the local particle density in the form of a Langevin equation. The language of interfaces overlaps with this perspective in the case of Hwa and Kardar's (1989a) surface evolution equations, whereas the absorbing state (AS) approach as well as depinning use a similar formalism but a different physical interpretation – what evolves in the former case is the configuration of the system, while it is the number of charges in the latter.
Self-organised criticality (SOC) is a very lively field that in recent years has branched out into many different areas and contributed immensely to the understanding of critical phenomena in nature. Since its discovery in 1987, it has been one of the most active and influential fields in statistical mechanics. It has found innumerable applications in a large variety of fields, such as physics, chemistry, medicine, sociology, linguistics, to name but a few. A lot of progress has been made over the last 20 years in understanding the phenomenology of SOC and its causes. During this time, many of the original concepts have been revised a number of times, and some, such as complexity and emergence, are still very actively discussed. Nevertheless, some if not most of the original questions remain unanswered. Is SOC ubiquitous? How does it work?
As the field matured and reached a widening audience, the demand for a summary or a commented review grew. When Professor Henrik J. Jensen asked me to write an updated version of his book on self-organised criticality six years ago, it struck me as a great honour, but an equally great challenge. His book is widely regarded as a wonderfully concise, well-written introduction to the field. More than 24 years after its conception, self-organised criticality is in a process of consolidation, which an up-to-date review has to appreciate just as much as the many new results discovered and the new directions explored.
In his review of SOC, Jensen (1998) asked four central questions paraphrased here.
Can SOC be defined as a distinct phenomenon?
Are there systems that display SOC?
What has SOC taught us?
Does SOC have any predictive power?
As discussed in the following, the answers are positive throughout, but slightly different from what was expected ten years ago, when the general consensus was that the failure of SOC experiments and computer models to display the expected features was merely a matter of improving the setup or increasing the system size. Firstly, this is not true: larger and purer systems have, in many cases, not improved the behaviour. Secondly, truly universal behaviour is not expected to be prone to tiny impurities or to display such dramatic finite size corrections. If the conclusion is that this is what generally happens in systems studied in SOC over the last twenty years, critical phenomena may not be the most suitable framework to describe them.
Can SOC be defined as a distinct phenomenon?
In the preceding chapters, SOC was regarded as the observation that some systems with spatial degrees of freedom evolve, by a form of self-organisation, to a critical point, where they display intermittent behaviour (avalanching) and (finite size) scaling as known from ordinary phase transitions (Bak et al., 1987, also Ch. 1). This definition makes it clearly distinct from other phenomena, although generic scale invariance has been observed elsewhere.
How does SOC work? What are the necessary and sufficient conditions for the occurrence of SOC? Can the mechanism underlying SOC be put to work in traditional critical phenomena? These questions are at the heart of the study of SOC phenomena. The hope is that an SOC mechanism would not only give insight into the nature of the critical state in SOC and its long-range, long-time correlations, but also provide a procedure to prompt this state in other systems. In the following, SOC is first placed in the context of ordinary critical phenomena, focusing on the question to what extent SOC has been preceded by phenomena with very similar features. The theories of these phenomena can give further insight into the nature of SOC. In the remainder, the two most successful mechanisms are presented, the second of which, the Absorbing State Mechanism (AS mechanism), is the most recent, most promising development. A few other mechanisms are discussed briefly in the last section.
SOC mechanisms generally fall into one of three categories. Firstly, there are those that show that SOC is an instance of generic scale invariance: SOC models cannot avoid being scale invariant because of characteristics such as bulk conservation and particle transport. The mechanism developed by Hwa and Kardar (1989a), Sec. 9.2, is the most prominent example of this type of explanation. This approach focuses solely on criticality and dismisses any self-organisation.