The main role of many networks is to provide the physical substrate for the flow of some physical quantity, data, or information. In particular, this is the case for the transportation and technological infrastructures upon which the everyday functioning of our society relies. In this context, congestion phenomena, failures, and breakdown avalanches can have dramatic effects, as witnessed in major blackouts and transportation breakdowns. Also, in technological networks, if individual elements do not have the capacity to cope with the amount of data to be handled, the network as a whole might be unable to perform efficiently. The understanding of congestion and failure avalanches is therefore a crucial research question in developing strategies aimed at network optimization and protection.
Clearly the study of network traffic and the emergence of congestion and large-scale failures cannot neglect the specific nature of the system. The huge literature addressing these questions is therefore domain oriented and goes beyond the scope of this book. On the other hand, large-scale congestion and failure avalanches can also be seen as emergent collective phenomena and, as such, studied through simple models with the aim of abstracting the general features due to the basic topology of the underlying network. This chapter is devoted to an overview of results concerning traffic, congestion, and avalanches, and focuses more on the general and ubiquitous features of these phenomena than on system-specific properties.
Networks have long been recognized as having a central role in the biological sciences. They are the natural underlying structures for the description of a wide array of biological processes, across scales varying from molecular processes to species interactions. Especially at smaller scales, most genes and proteins do not have a function on their own; rather, they acquire a specific role through the complex web of interactions with other proteins and genes. In recent years this perspective, largely fostered by the abundance of high-throughput experiments and the availability of entire genome sequences and gene co-expression patterns, has led to a stream of activity focusing on the architecture of biological networks.
The abundance of large-scale data sets on biological networks has revealed that their topological properties in many cases depart considerably from the random homogeneous paradigm. This evidence has spurred intense research activity aimed at understanding the origin of these properties as well as their biological relevance. The problem amounts to linking structure and function, in most cases, by understanding the interplay of topology and dynamical processes defined on the network. Empirical observations of heterogeneities have also revamped several areas and landmark problems such as Boolean network models and the issue of stability and complexity in ecosystems.
While concepts and methods of complex network analysis are nowadays standard tools in network biology, it is clear that a discussion of their relevance and roles has to be critically examined by taking into account the specific nature of the biological problem.
The present chapter is intended to provide a short introduction to the theory and modeling of equilibrium and non-equilibrium processes on networks and to define the basic modeling approaches and techniques used in the theory of dynamical processes. In particular, we define the master equation formalism and distinguish between equilibrium and non-equilibrium phenomena. Unfortunately, while the master equation allows for important conceptual distinction and categorization, its complete solution is hardly achievable even for very simple dynamical processes. For this reason we introduce the reader to techniques such as mean-field and continuous deterministic approximations, which usually represent viable approaches to understand basic features of the process under study. We also discuss Monte Carlo and agent-based modeling approaches that are generally implemented in large-scale numerical simulation methods.
These different theoretical methods help to define a general framework to demonstrate how the microscopic interactions between the elements of the system lead to cooperative phenomena and emergent properties of the dynamical processes. This strategy, going from microscopic interaction to emergent collective phenomena, has its roots in statistical physics methodology and population dynamics, and is currently viewed as a general paradigm to bridge the gap between the local and the large-scale properties of complex systems. It is important to stress, however, that the following material is a greatly abbreviated presentation of a huge field of research and by necessity just scratches the surface of the statistical theory of dynamical processes.
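As an illustration of the contrast between these approaches, the sketch below simulates a deliberately simple two-state process (elements activate with probability p and deactivate with probability q per time step; the process and rates are invented for illustration) by direct Monte Carlo simulation, and compares it with its mean-field rate equation. Because the elements here do not interact, the mean-field description is exact; for interacting processes on networks it becomes an approximation.

```python
import random

# Toy two-state process: each element activates with probability p
# and deactivates with probability q per time step (illustrative rates).
p, q, M, steps = 0.1, 0.3, 10000, 200
rng = random.Random(4)

# Monte Carlo: simulate every element explicitly.
state = [0] * M
for _ in range(steps):
    state = [(1 if rng.random() < p else 0) if s == 0
             else (0 if rng.random() < q else 1) for s in state]
mc_density = sum(state) / M

# Mean-field approximation: track only the average density rho,
# rho <- rho + p*(1 - rho) - q*rho, with fixed point rho* = p / (p + q).
rho = 0.0
for _ in range(steps):
    rho += p * (1 - rho) - q * rho

print(mc_density, rho, p / (p + q))
```

The stochastic estimate fluctuates around the deterministic fixed point, with fluctuations that shrink as the system size M grows.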
The study of collective behavior in social systems has recently witnessed an increasing number of works relying on computational and agent-based models. These models use very simple schemes for the micro-processes of social influence and focus instead on the emerging macro-level social behavior. Agent-based models for social phenomena are very similar in spirit to the statistical physics approach: the agents update their internal state through interaction with their neighbors, and the emergent macroscopic behavior of the system is the result of a large number of these interactions.
The behavior of all of these models has been extensively studied for agents located on the nodes of regular lattices or possessing the ability to interact homogeneously with each other. But as described in Chapter 2, interactions between individuals and the structure of social systems can be generally represented by complex networks whose topologies exhibit many non-trivial properties such as small-world, high clustering, and strong heterogeneity of the connectivity pattern. Attention has therefore recently shifted to the study of the effect of more realistic network structures on the dynamical evolution and emergence of social phenomena and organization. In this chapter, we review the results obtained in four prototypical models for social interactions and show the effect of the network topology on the emergence of collective behavior.
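The agent-based scheme described above can be sketched with a minimal voter-type model, in which a randomly chosen agent copies the opinion of a random neighbor. The model, network, and parameters below are illustrative choices, not taken from the text:

```python
import random

def voter_model(adjacency, steps, seed=0):
    """Minimal voter model: at each update a random agent copies
    the state of a randomly chosen neighbour."""
    rng = random.Random(seed)
    n = len(adjacency)
    state = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        if adjacency[i]:
            j = rng.choice(adjacency[i])
            state[i] = state[j]
    return state

# Example network: a ring of 20 agents, each linked to its two nearest neighbours.
n = 20
adjacency = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
final = voter_model(adjacency, steps=5000)
# The magnetisation |sum(state)|/n approaches 1 as consensus emerges.
print(abs(sum(final)) / n)
```

Replacing the ring with a small-world or heterogeneous network changes the time to consensus, which is precisely the kind of topology effect studied in this chapter.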
In many of the systems and models in which stochastic resonance has been observed, the essential nonlinearity is effectively a single threshold. Usually SR occurs when an entirely subthreshold signal is subjected to additive noise, which allows threshold crossings that otherwise would not occur. In such systems, it is generally thought that when the input signal is suprathreshold, the addition of noise will not have any beneficial effect on the system output.
However, the 1999 discovery of a novel form of SR in simple threshold-based systems showed that this is not the case. This phenomenon is known as suprathreshold stochastic resonance, and occurs in arrays of identical threshold devices subject to independent additive noise. In such arrays, SR can occur regardless of whether the signal is entirely subthreshold or not, hence the name suprathreshold SR. The SSR effect is quite general, and is not restricted to any particular type of signal or noise distribution.
This chapter reviews the early theoretical work on SSR. Recent theoretical extensions are also presented, as well as numerical analysis of previously unstudied input and noise signals, a new technique for calculating the mutual information by integration, and an investigation of a number of channel capacity questions for SSR. Finally, this chapter shows how SSR can be interpreted as a stochastic quantization scheme.
Introduction
Suprathreshold stochastic resonance (SSR) is a form of stochastic resonance (SR) that occurs in arrays of identical threshold devices. A schematic model of the system is shown in Fig. 4.1, and is described in detail in Section 4.3.
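A minimal numerical sketch of such an array is given below, assuming zero thresholds and Gaussian signal and noise; these are illustrative choices, and the precise model is specified in Section 4.3:

```python
import random

def ssr_output(x, n_devices, noise_std, threshold=0.0, rng=None):
    """SSR array: N identical threshold devices all receive the same
    input x plus independent additive Gaussian noise; the overall
    output is the count of devices whose input exceeds the threshold."""
    rng = rng or random.Random()
    return sum(1 for _ in range(n_devices)
               if x + rng.gauss(0.0, noise_std) > threshold)

rng = random.Random(1)
# The output is an integer between 0 and N; because each device sees
# different noise, the ensemble of crossings encodes the input value.
y = ssr_output(x=0.5, n_devices=63, noise_std=1.0, rng=rng)
print(y)
```

Note that with identical thresholds and no noise all devices would respond identically, so the array would carry no more information than a single device; the independent noise is what makes the N outputs differ.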
The aim of this chapter is to find asymptotic large-N approximations to the mean square error distortion for the suprathreshold stochastic resonance model. In particular, we are interested in how the distortion varies with noise intensity and how it scales with the number of threshold devices. That is, does the distortion become asymptotically small for large N?
Introduction
Chapter 6 developed the idea of treating the SSR model as a lossy source coding or quantization model. We saw that such a treatment requires specification of reproduction points corresponding to each of the N + 1 discrete output states. Once specified, an approximate reconstruction of the input signal can be made from a decoding, and the average error between this approximation and the original signal subsequently measured by the mean square error (MSE) distortion. We saw also in Chapter 5 that asymptotic approximations to the output probability mass function, Py(n), output entropy, average conditional output entropy, and mutual information can be found if the number of threshold devices, N, in the SSR model is allowed to become very large. The aim of this chapter is to again allow N to become very large, and develop asymptotic approximations to the MSE distortion for the cases of optimal linear and optimal nonlinear decodings of the SSR model.
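As an illustration of the quantity involved, the sketch below estimates the MSE distortion of a simple least-squares linear decoding of the SSR output numerically, assuming zero thresholds, Gaussian input and noise, and illustrative parameters. This is a brute-force estimate, not the asymptotic analysis developed in the chapter:

```python
import random

def ssr_count(x, n, noise_std, rng):
    """Output of N identical zero-threshold devices with independent Gaussian noise."""
    return sum(1 for _ in range(n) if x + rng.gauss(0.0, noise_std) > 0.0)

rng = random.Random(2)
N, sigma, samples = 31, 1.0, 2000
xs = [rng.gauss(0.0, 1.0) for _ in range(samples)]
ys = [ssr_count(x, N, sigma, rng) for x in xs]

# Least-squares linear decoding x_hat = a*y + b, a simple choice of
# reproduction points; an optimal nonlinear decoding would do better.
my = sum(ys) / samples
mx = sum(xs) / samples
a = sum((y - my) * (x - mx) for x, y in zip(xs, ys)) / sum((y - my) ** 2 for y in ys)
b = mx - a * my
mse = sum((x - (a * y + b)) ** 2 for x, y in zip(xs, ys)) / samples
print(mse)  # well below the input variance; shrinks further as N grows
```

Repeating the estimate for increasing N gives a numerical handle on the scaling question that the asymptotic approximations address analytically.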
Chapter structure
This chapter has three main sections. We begin in Section 7.2 by letting N become large in the formulas derived in Chapter 6 and analyzing the result. Next, Section 7.3 takes the same approach from the estimation theory perspective.
We begin by briefly outlining the background and motivation for this book, before giving an overview of each chapter, and pointing out the most significant questions addressed.
Although the methodology used is firmly within the fields of signal processing and mathematical physics, the motivation is interdisciplinary in nature.
The initial open questions that inspired this direction were:
(i) How might neurons make use of a phenomenon known as stochastic resonance?
(ii) How might a path towards engineering applications inspired by these studies be initiated?
Stochastic resonance and sensory neural coding
Stochastic resonance (SR) is a counter-intuitive phenomenon where the presence of noise in a nonlinear system is essential for optimal system performance. It is not a technique. Instead, it is an effect that might be observed and potentially exploited or induced. It has been observed to occur in many systems, including in both neurons and electronic circuits.
A motivating idea is that since we know the brain is far better at many tasks than electronic and computing devices, maybe we can learn something from the brain. If we can ultimately better understand the possible exploitation of SR in the brain and nervous system, we may also be able to improve aspects of electronic systems.
Although it is important to have an overall vision, in practical terms it is necessary to consider a concrete starting point. This book is particularly focused on an exciting new development in the field of SR, known as suprathreshold stochastic resonance (SSR) (Stocks 2000c). Suprathreshold stochastic resonance occurs in a parallel array of simple threshold devices.
By
Bart Kosko, Department of Electrical Engineering, Signal and Image Processing Institute, University of Southern California
Sergey M. Bezrukov, National Institutes of Health (NIH), Bethesda, Maryland, USA
Due to the multidisciplinary nature of stochastic resonance, the Foreword begins with a commentary from Bart Kosko, representing the engineering field, and ends with comments from Sergey M. Bezrukov, representing the biophysics field. Both are distinguished researchers in the area of stochastic resonance, and together they bring the wider perspective demanded by the nature of the topic.
The authors have produced a breakthrough treatise with their new book Stochastic Resonance. The work synthesizes and extends several threads of noise-benefit research that have appeared in recent years in the growing literature on stochastic resonance. It carefully explores how a wide variety of noise types can often improve several types of nonlinear signal processing and communication. Readers from diverse backgrounds will find the book accessible because the authors have patiently argued their case for nonlinear noise benefits using only basic tools from probability and matrix algebra.
Stochastic Resonance also offers a much-needed treatment of the topic from an engineering perspective. The historical roots of stochastic resonance lie in physics and neural modelling. The authors reflect this history in their extensive discussion of stochastic resonance in neural networks. But they have gone further and now present the exposition in terms of modern information theory and statistical signal processing. This common technical language should help promote a wide range of stochastic resonance applications across engineering and scientific disciplines. The result is an important scholarly work that substantially advances the state of the art.
Engineered systems usually require finding the right tradeoff between cost and performance. Communications systems are no exception, and much theoretical work has been undertaken to find the limits of achievable performance for the transmission of information. For example, Shannon's celebrated channel capacity formula and coding theorems say that there is an upper limit on the average amount of information that can be transmitted in a channel for error-free communication. This limit can be increased if the power of the signal is increased, or the bandwidth of the channel is increased. However, nothing comes for free, and increasing either power or bandwidth can be expensive; hence there is a tradeoff between cost and performance in such a communications system – performance (measured by bit rates) can be increased by increasing the cost (power or bandwidth). This chapter discusses several problems related to the tradeoff between cost and performance in the SSR model. We are interested in the SSR model both as a channel model, from an energy-efficient neural coding point of view, and as a lossy source coding model, where there is a tradeoff between rate and distortion.
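The power and bandwidth tradeoff can be made concrete with Shannon's capacity formula for the additive white Gaussian noise channel, C = B log2(1 + S/N); the numerical values below are purely illustrative:

```python
from math import log2

def awgn_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon capacity of an AWGN channel, in bits per second."""
    return bandwidth_hz * log2(1.0 + signal_power / noise_power)

# Doubling the power raises capacity only logarithmically, while
# doubling the bandwidth at a fixed SNR doubles capacity outright.
base = awgn_capacity(1e6, 10.0, 1.0)        # 1 MHz bandwidth, SNR = 10
more_power = awgn_capacity(1e6, 20.0, 1.0)  # twice the signal power
more_band = awgn_capacity(2e6, 10.0, 1.0)   # twice the bandwidth, same SNR
print(base, more_power, more_band)
```

In practice widening the band also admits more noise power, so the SNR does not stay fixed for free; that is exactly the cost side of the tradeoff described above.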
Introduction
Chapter 8 introduces an extension to the suprathreshold stochastic resonance (SSR) model by allowing all thresholds to vary independently, instead of all having the same value. This chapter further extends the SSR model by introducing an energy constraint into the optimal stochastic quantization problem. We also examine the tradeoff between rate and distortion, when the SSR model is considered as a stochastic quantizer.
In this chapter we illustrate the relevance of stochastic resonance to auditory neural coding. This relates to natural auditory coding and also to coding by cochlear implant devices. Cochlear implants restore partial hearing to profoundly deaf people by trying to mimic, using direct electrical stimulation of the auditory nerve, the effect of acoustic stimulation.
Introduction
It is not surprising that the study of auditory neural coding involves stochastic phenomena, due to one simple fact – signal transduction in the ear is naturally a very noisy process. The noise arises from a number of sources but principally it is the Brownian motion of the stereocilia (hairs) of the inner hair cells that has the largest effect (Hudspeth 1989). Although the Brownian motion of the stereocilia appears small – typically causing displacements at the tips of the stereocilia of 2–3 nm (Denk and Webb 1992) – this is not small compared with the deflection of the tips at the threshold of hearing. Evidence exists that suggests, at threshold, the tip displacement is of the order of 0.3 nm (Sellick et al. 1982, Crawford and Fettiplace 1986), thus yielding a signal-to-noise ratio (SNR) of about –20 dB at threshold. Of course, under normal operating conditions the SNR will be greater than this, but, at the neural level, is typically of the order of 0 dB (DeWeese and Bialek 1995).
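The quoted figure of about –20 dB follows directly from the ratio of these displacements, assuming the SNR is computed on amplitudes (hence the factor of 20):

```python
from math import log10

signal_nm = 0.3  # tip displacement at the threshold of hearing
noise_nm = 3.0   # upper end of the quoted Brownian-motion displacement
snr_db = 20.0 * log10(signal_nm / noise_nm)
print(round(snr_db, 1))  # -20.0
```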
Given the level of noise, and the fact that neurons are highly nonlinear, the auditory system is a prime candidate for observing stochastic resonance (SR) type behaviour.
Quantization of a signal or data source refers to the division or classification of that source into a discrete number of categories or states. It occurs, for example, when analogue electronic signals are converted into digital signals, or when data are binned into histograms. By definition, quantization is a lossy process, which compresses data into a more compact representation, so that the number of states in a quantizer's output is usually far fewer than the number of possible input values.
Most existing theory on the performance and design of quantization schemes specifies only deterministic rules governing how data are quantized. By contrast, stochastic quantization is a term intended to describe quantization in which the rules governing the assignment of input values to output states are stochastic rather than deterministic. One form of stochastic quantization that has already been widely studied is a signal processing technique called dithering. However, the stochastic aspect of dithering is usually restricted to the equivalent of adding random noise to a signal prior to quantization. The term stochastic quantization is intended to be far more general, applying whenever the rules of the quantization process are themselves stochastic.
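The distinction between deterministic quantization and dithering can be sketched as follows; the uniform quantizer, uniform dither, and input value are illustrative choices:

```python
import random

def quantize(x, step):
    """Deterministic uniform (mid-tread) quantizer."""
    return step * round(x / step)

def dithered_quantize(x, step, rng):
    """Dithering: add random noise before the deterministic quantizer.
    This is the restricted form of stochastic quantization noted above."""
    d = rng.uniform(-step / 2, step / 2)
    return quantize(x + d, step)

rng = random.Random(3)
step = 1.0
# A constant input of 0.3 always rounds to 0 deterministically, but
# the dithered outputs average out close to the true value of 0.3.
x = 0.3
outs = [dithered_quantize(x, step, rng) for _ in range(10000)]
print(quantize(x, step), sum(outs) / len(outs))
```

Because the dither is simply noise added before a fixed rule, the quantizer itself remains deterministic; in the more general stochastic quantization studied in this book, the assignment rule itself is random.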
The inspiration for this study comes from a phenomenon known as stochastic resonance, which is said to occur when the presence of noise in a system provides a better performance than the absence of noise. Specifically, this book discusses a particular form of stochastic resonance – discovered by Stocks – known as suprathreshold stochastic resonance, and demonstrates how and why this effect is a form of stochastic quantization.