When Bak, Tang, and Wiesenfeld (1987) coined the term Self-Organised Criticality (SOC), it was an explanation for an unexpected observation of scale invariance and, at the same time, a programme of further research. Over the years it developed into a subject area which is concerned mostly with the analysis of computer models that display a form of generic scale invariance. The primacy of the computer model is manifest in the first publication and throughout the history of SOC, which evolved with and revolved around such computer models. That has led to a plethora of computer ‘models’, many of which are not intended to model much except themselves (also Gisiger, 2001), in the hope that they display a certain aspect of SOC in a particularly clear way.
The question whether SOC exists is empty if SOC is merely the title for a certain class of computer models. In the following, the term SOC will therefore be used in its original meaning (Bak et al., 1987), to be assigned to systems
with spatial degrees of freedom [which] naturally evolve into a self-organized critical point.
Such behaviour is to be juxtaposed to the traditional notion of a phase transition, which is the singular, critical point in a phase diagram, where a system experiences a breakdown of symmetry and long-range spatial and, in non-equilibrium, also temporal correlations, generally summarised as (power law) scaling (Widom, 1965a,b; Stanley, 1971).
In this chapter, some important analytical techniques and results are discussed. The first two sections are concerned with mean-field theory, which is routinely applied in SOC, and renormalisation which has had a number of celebrated successes in SOC. As discussed in Sec. 8.3, Dhar (1990a) famously translated the set of rules governing an SOC model into operators, which provides a completely different, namely algebraic perspective. Directed models, discussed in Sec. 8.4, offer a rich basis of exactly solvable models for the analytical methods discussed in this chapter. In the final section, Sec. 8.5, SOC is translated into the language of the theory of interfaces.
It is interesting to review the variety of theoretical languages that SOC models have been cast in. Mean-field theories express SOC models (almost) at the level of updating rules and thus more or less explicitly in terms of a master equation. The same applies to some of the renormalisation group procedures (Vespignani, Zapperi, and Loreto, 1997), although Díaz-Guilera (1992) suggested very early an equation of motion of the local particle density in the form of a Langevin equation. The language of interfaces overlaps with this perspective in the case of Hwa and Kardar's (1989a) surface evolution equations, whereas the absorbing state (AS) approach as well as depinning use a similar formalism but a different physical interpretation – what evolves in the former case is the configuration of the system, while it is the number of charges in the latter.
Self-organised criticality (SOC) is a very lively field that in recent years has branched out into many different areas and contributed immensely to the understanding of critical phenomena in nature. Since its discovery in 1987, it has been one of the most active and influential fields in statistical mechanics. It has found innumerable applications in a large variety of disciplines, such as physics, chemistry, medicine, sociology and linguistics, to name but a few. A lot of progress has been made over the last 20 years in understanding the phenomenology of SOC and its causes. During this time, many of the original concepts have been revised a number of times, and some, such as complexity and emergence, are still very actively discussed. Nevertheless, some if not most of the original questions remain unanswered. Is SOC ubiquitous? How does it work?
As the field matured and reached a widening audience, the demand for a summary or a commented review grew. When Professor Henrik J. Jensen asked me to write an updated version of his book on self-organised criticality six years ago, it struck me as a great honour, but an equally great challenge. His book is widely regarded as a wonderfully concise, well-written introduction to the field. More than 24 years after its conception, self-organised criticality is in a process of consolidation, which an up-to-date review has to appreciate just as much as the many new results discovered and the new directions explored.
In his review of SOC, Jensen (1998) asked four central questions, paraphrased here:
Can SOC be defined as a distinct phenomenon?
Are there systems that display SOC?
What has SOC taught us?
Does SOC have any predictive power?
As discussed in the following, the answers are positive throughout, but slightly different from what was expected ten years ago, when the general consensus was that the failure of SOC experiments and computer models to display the expected features was merely a matter of improving the setup or increasing the system size. Firstly, this is not true: larger and purer systems have, in many cases, not improved the behaviour. Secondly, truly universal behaviour is not expected to be spoilt by tiny impurities or to display such dramatic finite size corrections. If the conclusion is that this is what generally happens in systems studied in SOC over the last twenty years, critical phenomena may not be the most suitable framework to describe them.
Can SOC be defined as a distinct phenomenon?
In the preceding chapters, SOC was regarded as the observation that some systems with spatial degrees of freedom evolve, by a form of self-organisation, to a critical point, where they display intermittent behaviour (avalanching) and (finite size) scaling as known from ordinary phase transitions (Bak et al., 1987, also Ch. 1). This definition makes it clearly distinct from other phenomena, although generic scale invariance has been observed elsewhere.
How does SOC work? What are the necessary and sufficient conditions for the occurrence of SOC? Can the mechanism underlying SOC be put to work in traditional critical phenomena? These questions are at the heart of the study of SOC phenomena. The hope is that an SOC mechanism would not only give insight into the nature of the critical state in SOC and its long-range, long-time correlations, but also provide a procedure to prompt this state in other systems. In the following, SOC is first placed in the context of ordinary critical phenomena, focusing on the question to what extent SOC has been preceded by phenomena with very similar features. The theories of these phenomena can give further insight into the nature of SOC. In the remainder, the two most successful mechanisms are presented, the second of which, the Absorbing State Mechanism (AS mechanism), is the most recent, most promising development. A few other mechanisms are discussed briefly in the last section.
SOC mechanisms generally fall into one of three categories. Firstly, there are those that show that SOC is an instance of generic scale invariance, by showing that SOC models cannot avoid being scale invariant, because of their characteristics, such as bulk conservation and particle transport. The mechanism developed by Hwa and Kardar (1989a), Sec. 9.2, is the most prominent example of this type of explanation. This approach focuses solely on criticality and dismisses any self-organisation.
Most computational physicists try to strike a balance between a number of conflicting objectives. Ideally, a model is quickly implemented, easy to maintain, readily extensible, fast, and frugal with memory. A few general rules can help to get closer to that ideal. Well-written code that uses proper indentation, comments and symmetries (see for example PUSH and POP below) helps to avoid bugs and improves maintainability. How much tweaking and tuning can be done without spoiling readability and maintainability of the code is a matter of taste and experience. Sometimes an obfuscated implementation of an obscure algorithm makes all the difference. Yet, many optimisations have apparent limits where any reduction of interdependence and any improvement of data capture is compensated by an equal increase in computational complexity and thus runtime. Often a radical rethink is necessary to overcome such an ostensible limit of maximum information per CPU time, as exemplified by the Swendsen-Wang algorithm (Swendsen and Wang, 1987) for the Ising Model, which represents a paradigmatic change from the classic Metropolis algorithm (Metropolis, Rosenbluth, Rosenbluth, et al., 1953).
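The symmetric PUSH and POP mentioned above can be sketched as a pair of C macros maintaining a stack of active sites, as is commonly done when processing an avalanche. The macro names follow the text; the stack size and the assertion-based bounds checks are illustrative choices, not the definitive implementation.

```c
#include <assert.h>

#define STACK_SIZE 1024
static int stack[STACK_SIZE];
static int stack_top = 0;   /* number of sites currently on the stack */

/* PUSH and POP are written symmetrically: every PUSH is matched by a POP,
 * so an unbalanced stack at the end of an avalanche signals a bug
 * immediately.  The assertions guard against over- and underflow. */
#define PUSH(site) (assert(stack_top < STACK_SIZE), (void)(stack[stack_top++] = (site)))
#define POP(site)  (assert(stack_top > 0), (void)((site) = stack[--stack_top]))

/* Small wrappers exercising the macros: two sites become active and are
 * retrieved in LIFO order. */
int demo_push_pop(void)
{
    int s;
    PUSH(42);
    PUSH(7);
    POP(s);      /* LIFO: 7 comes back first */
    POP(s);      /* then 42 */
    return s;    /* returns 42 */
}

int stack_is_empty(void)
{
    return stack_top == 0;
}
```

Keeping the two macros textually adjacent and mirror-images of each other is exactly the kind of symmetry that makes stack-balance bugs easy to spot during a code review.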
Nevertheless, one should not underestimate the amount of real time as well as CPU time that can be saved by opting for slightly less powerful code that is more stable and correct from the start. By the same token, it usually pays to follow up even a little hunch that something is not working correctly.
In broad terms, the aim of the analysis of a supposed self-organised critical system is to determine whether the phenomenon is merely the sum of independent local events, or is caused by interactions on a global scale, i.e. cooperation, which is signalled by algebraic correlations and non-Gaussian event distributions. Self-organised criticality therefore revolves around scaling and scale invariance, as it describes the asymptotic behaviour of large, complex systems and hints at their universality (Kadanoff, 1990). Numerical and analytical work generally concentrates on the scaling features of a model. Understanding their origin and consequences is fundamental to the analysis as well as to the interpretation of SOC models, beginning at the very motivation of a particular model and permeating down to the level of the presentation of data.
During the last fifteen years or so, the understanding of scaling in complex systems has greatly improved and some standard numerical techniques have been established, which allow the comparison of different models, assumptions and approaches. Yet, there is still noticeable confusion regarding the implications of scaling as well as its quantification.
Most concepts, such as universality and generalised homogeneous functions, are taken from or are motivated by the equilibrium statistical mechanics of phase transitions (Stanley, 1971; Privman et al., 1991), and were first applied to SOC in a systematic manner by Kadanoff et al. (1989). Yet, what appears to be rather natural in the context of equilibrium statistical mechanics might not be so for complex systems.
This chapter describes a number of (numerical) techniques used to estimate primarily universal quantities, such as exponents, moment ratios and scaling functions. The methods are applied during post-processing, i.e. after a numerical simulation, such as the OFC Model, Appendix A, has terminated. Many methods are linked directly to the scaling arguments presented in Ch. 2, i.e. they either probe for the presence of scaling or derive properties assuming scaling.
A time series is the most useful representation of the result of a numerical simulation, because it gives insight into the temporal evolution of the model and provides a natural way to determine the variance of the various observables reliably. Most of the analysis focuses on the stationary state of the model, where the statistics of one instance of the model with one particular initial state is virtually indistinguishable from that with another initial state. The end of the transient can be determined by comparing two or more independent runs, or by comparing one run to exactly known results (such as the average avalanche size) or to results of much later times. The transient can be regarded as past as soon as the observables agree to within one standard deviation. It pays to be generous with the transient, in particular when higher moments or complex observables are considered.
In the stationary state, the ensemble average (taking, at equal times, a sample across a large number of realisations of the model) is strictly time independent.
Many different models have been developed in order to study particular features of SOC, such as 1/f noise, non-conservation and anisotropy. In Part II, some of the more important models are introduced and their general properties discussed. At the beginning of each section, the definition and characteristics of each model are catalogued in a box and the exponents listed in a table. Each section is essentially independent, discussing a model in its own right for its particular qualities. Nevertheless, relations to other models are emphasised and a minimal set of common observables (see Sec. 1.3), in particular exponents (Ch. 2), is discussed for each of them.
The attempt to tabulate exhaustively all numerical estimates for exponents of the various models is futile; it is practically impossible to find all published exponents for a model. It is similarly fruitless to draw a clear line between genuine estimates and exponents derived from others using (assumed) scaling relations. Wherever possible, exponents are only listed if the sole underlying assumption is simple scaling as stated in the caption. The tables of exponents therefore serve only to illustrate the variety and sometimes the disparity of results. The exponents are listed in historical order, which often means that the data towards the bottom of the tables are based on more extensive numerics and are thus more reliable. The tables should enable the reader to judge whether a given model displays systematic, robust scaling behaviour or not.
Both of the following models incorporate a form of stochasticity in the relaxation mechanism. In the MANNA Model particles topple to sites randomly chosen among nearest neighbours and in the OSLO Model the local critical slopes are chosen at random. They both display robust scaling and belong to the same enormous universality class, which contains two large classes of ordinary (tuned), non-equilibrium critical phenomena: directed percolation with conserved field (C-DP) and the quenched Edwards-Wilkinson equation (qEW equation). The former is paradigmatically represented by the MANNA Model, the latter by the OSLO Model.
Both models are generally considered to be Abelian, even when they strictly are not. In their original versions, the relaxation of the MANNA Model is non-Abelian and so is the driving in the OSLO Model. This can be perceived as a shortcoming, not only because the BTW Model has been understood in much greater detail by studying its Abelian variant, but also because Abelian versions simplify the implementation, as the final configuration becomes independent of the order of updates. Nowadays, the MANNA Model and, where the issue arises, the OSLO Model are studied in their Abelian variant. The MANNA Model is currently probably the most intensely studied model of SOC.
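A minimal sketch of the Abelian MANNA Model in one dimension may make the stochastic relaxation rule concrete: a site carrying two or more particles topples, sending two particles to independently chosen nearest neighbours; particles crossing the open boundaries are lost. The lattice size, the fixed driving site and the use of rand() are illustrative choices, not the definitive implementation.

```c
#include <stdlib.h>

#define L 16          /* illustrative lattice size */
static int z[L];      /* local particle numbers */

/* Relax the lattice: any site with z >= 2 topples, sending two particles
 * to independently, randomly chosen nearest neighbours; particles leaving
 * the lattice are dissipated.  Returns the avalanche size, i.e. the
 * number of topplings.  By Abelianness, the final configuration does not
 * depend on the order in which active sites are processed. */
long manna_relax(void)
{
    long size = 0;
    int active = 1;
    while (active) {
        active = 0;
        for (int i = 0; i < L; i++) {
            while (z[i] >= 2) {
                z[i] -= 2;
                size++;
                active = 1;
                for (int k = 0; k < 2; k++) {
                    int j = (rand() % 2) ? i + 1 : i - 1;
                    if (j >= 0 && j < L)
                        z[j]++;        /* particle received by neighbour */
                    /* else: dissipated at the open boundary */
                }
            }
        }
    }
    return size;
}

/* Drive: add a single particle at the given site, then relax. */
long manna_drive(int site)
{
    z[site]++;
    return manna_relax();
}
```

After each avalanche every site carries at most one particle, which is the set of stable configurations of the Abelian MANNA Model.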
In their Abelian form, both models can be described in terms of stochastic equations of motion (Sec. 6.2.1.3 and Sec. 6.3.4.2). These look very different for the two models.
The models discussed in this chapter reproduce some of the phenomenology of (naïve) sandpiles. As discussed earlier (Sec. 3.1), it is clear that the physics behind a real relaxing sandpile is much richer than can be captured by the following ‘sandpile models’. Yet, it would be unjust to count that as a shortcoming, because these models were never intended to describe all the physics of a sandpile. The situation is similar to that of the ‘Forest Fire Model’ which is only vaguely reminiscent of forest fires and was, explicitly, not intended to model them. The names of these models should not be taken literally; they merely serve as a sometimes humorous aide-mémoire for their setup, similar to Thomson's Plum Pudding Model which is certainly not a model of a plum pudding.
In the following section, the iconic Bak-Tang-Wiesenfeld Model and its hugely important derivative, the Abelian Sandpile Model, are discussed in detail. This is followed by the ZHANG Model, which was intended as a continuous version of the BAK-TANG-WIESENFELD Model. Their common feature is a deterministic, rather than stochastic, relaxation rule. Although a lot of analytical and numerical progress has been made for all three models, the question to what extent they display true scale invariance remains unresolved.
The Bak-Tang-Wiesenfeld Model
The publication of the Bak-Tang-Wiesenfeld (BTW) Model (see Box 4.1) (Bak et al., 1987) marks the beginning of the entire field.
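The deterministic relaxation rule of the BTW Model in two dimensions can be sketched as follows: a site whose height reaches the critical value 4 (twice the dimension) topples, passing one grain to each of its four nearest neighbours; grains crossing the open boundaries are lost. The lattice size and the sweep-based update scheme are illustrative choices.

```c
#define N 16           /* illustrative lattice side length */
static int h[N][N];    /* local heights */

/* Deterministic BTW relaxation in two dimensions: a site with height
 * h >= 4 topples, passing one grain to each of its four neighbours;
 * grains crossing the boundary are dissipated (open boundaries).
 * Returns the avalanche size, i.e. the number of topplings.  Because
 * the model is Abelian, the final configuration is independent of the
 * order in which sites are toppled. */
long btw_relax(void)
{
    long size = 0;
    int active = 1;
    while (active) {
        active = 0;
        for (int x = 0; x < N; x++)
            for (int y = 0; y < N; y++)
                if (h[x][y] >= 4) {
                    h[x][y] -= 4;
                    size++;
                    active = 1;
                    if (x > 0)     h[x - 1][y]++;
                    if (x < N - 1) h[x + 1][y]++;
                    if (y > 0)     h[x][y - 1]++;
                    if (y < N - 1) h[x][y + 1]++;
                }
    }
    return size;
}
```

For instance, a single bulk site at the critical height topples exactly once, leaving one grain on each of its four neighbours.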