The mathematical modeling of epidemics is a very active field of research that crosses different disciplines. Epidemiologists, computer scientists, and social scientists share a common interest in studying spreading phenomena and rely on very similar models for the description of the diffusion of viruses, knowledge, and innovation. Epidemic modeling has developed an impressive array of methods and approaches aimed at describing various spreading phenomena, as well as incorporating many details affecting the spreading of real pathogens. In particular, understanding and predicting an epidemic outbreak requires a detailed knowledge of the contact networks defining the interactions between individuals. The theoretical framework for epidemic spreading must therefore be extended with appropriate models and methods that cope with the intrinsic complexity encountered in many real situations.
In this chapter, we introduce the general framework of epidemic modeling in complex networks, showing how the introduction of strong degree fluctuations leads to unusual results concerning the basic properties of disease spreading processes. Using some specific examples, we show how plugging complex networks into epidemic modeling yields new interpretative frameworks for the spread of diseases and provides a quantitative rationalization of general features observed in epidemic spreading. We end the chapter by discussing general issues in modeling the spread of diseases in complex environments, and we describe the metapopulation models that are at the basis of modern computational epidemiology.
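To make concrete how degree fluctuations enter such a framework, the following sketch integrates the classic degree-based (heterogeneous) mean-field SIS equations for an uncorrelated network, whose epidemic threshold is set by the ratio 〈k〉/〈k²〉 and thus shrinks as degree fluctuations grow. The use of networkx, the Barabási–Albert degree sequence, and all numerical parameters are illustrative assumptions, not taken from the text.

```python
# Minimal sketch: degree-based mean-field SIS dynamics on an uncorrelated network.
# Assumptions (illustrative): unit recovery rate, networkx for the degree sequence,
# simple forward-Euler integration of the mean-field equations.
import numpy as np
import networkx as nx

def sis_mean_field(degrees, lam, steps=2000, dt=0.01, rho0=0.01):
    """Integrate d(rho_k)/dt = -rho_k + lam*k*(1-rho_k)*Theta and return the
    stationary prevalence, with Theta = sum_k k P(k) rho_k / <k>."""
    ks, counts = np.unique(degrees, return_counts=True)
    pk = counts / counts.sum()            # empirical degree distribution P(k)
    k_mean = (ks * pk).sum()
    rho = np.full(len(ks), rho0)          # density of infected nodes of degree k
    for _ in range(steps):
        theta = (ks * pk * rho).sum() / k_mean
        rho += dt * (-rho + lam * ks * (1.0 - rho) * theta)
    return (pk * rho).sum()               # overall prevalence

# Heavy-tailed degree sequence from a Barabasi-Albert graph (illustrative choice).
G = nx.barabasi_albert_graph(10_000, m=3, seed=1)
degrees = np.array([d for _, d in G.degree()])
k1, k2 = degrees.mean(), (degrees**2).mean()
print(f"mean-field threshold estimate <k>/<k^2> = {k1 / k2:.4f}")
for lam in (0.005, 0.02, 0.05, 0.1):
    print(f"lambda={lam:.3f}  stationary prevalence ~ {sis_mean_field(degrees, lam):.4f}")
```

Running the same sketch with a larger network, and hence larger degree fluctuations, pushes the threshold estimate toward zero, which is the qualitative point developed in the chapter.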
In Chapter 6, the case of percolation and resilience of undirected graphs has been considered. Various authors have tackled the problem of its generalization to directed networks (Newman et al., 2001; Schwartz et al., 2002; Dorogovtsev et al., 2001a; Boguñá and Serrano, 2005).
As seen in Chapter 1, directed networks are topologically richer and more complex than their undirected counterparts. The links attached to a node can be either incoming or outgoing. The number of in-links is the in-degree kin, and the number of out-links is the out-degree kout. Since some pairs of nodes can be linked by two directed links pointing in opposite directions, it is also possible to distinguish between strictly incoming, strictly outgoing, and bidirectional links, the total degree being the sum of their respective numbers ki, ko, kb, with kin = ki + kb and kout = ko + kb. The degree distribution is thus the joint distribution of these degrees. Most often, strictly directed networks are considered, with no bidirectional links: kb = 0. Note that by a simple conservation rule, the averages of ki and ko are then equal: 〈ki〉 = 〈ko〉. Each (weakly) connected component (WCC) of a directed graph also has an internal structure, as described in Chapter 1, and is composed of a strongly connected component (SCC), an in-component and an out-component, tendrils, and tubes.
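These quantities are straightforward to measure numerically. The sketch below (one possible implementation, assuming networkx and an arbitrary random directed graph) verifies the conservation rule 〈kin〉 = 〈kout〉 and extracts the weakly and strongly connected components mentioned above.

```python
# Sketch: in/out degrees and component structure of a directed graph.
# The graph used here is an arbitrary illustrative example (networkx assumed).
import networkx as nx

G = nx.gnp_random_graph(1000, 0.005, seed=42, directed=True)

kin = dict(G.in_degree())
kout = dict(G.out_degree())
n = G.number_of_nodes()

# Every directed edge contributes one unit of in-degree and one of out-degree,
# so the averages must coincide: <k_in> = <k_out> = E / N.
avg_in = sum(kin.values()) / n
avg_out = sum(kout.values()) / n
print(f"<k_in> = {avg_in:.3f}, <k_out> = {avg_out:.3f}, E/N = {G.number_of_edges() / n:.3f}")

# Weakly connected components (WCC) and the strongly connected components (SCC)
# nested inside them; the largest SCC is the core described in the text.
wccs = sorted(nx.weakly_connected_components(G), key=len, reverse=True)
sccs = sorted(nx.strongly_connected_components(G), key=len, reverse=True)
print("largest WCC size:", len(wccs[0]))
print("largest SCC size:", len(sccs[0]))
```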
In this chapter we present a review of the network models that will be used to study dynamical processes in the context of computational approaches. These models will also help to determine the influence that specific network features have on the various phenomena considered in the next chapters. To this end, we will discuss the different modeling approaches and put the activity focused on each class of models into the proper perspective. Particular emphasis will be devoted to models that have been developed as theoretical examples of the specific classes of real-world networks observed empirically.
Randomness and network models
Static random graph models and topology generators such as the paradigmatic Erdős–Rényi model (Erdős and Rényi, 1959; 1960; 1961) and the network generation algorithm of Molloy and Reed (1995) are the simplest network models to include stochasticity as an essential element. They are characterized by an absolute lack of knowledge of the principles that guide the creation of edges between nodes. Lacking any information, the simplest assumption one can make is to connect pairs of nodes at random with a given connection probability p. In its original formulation, an Erdős–Rényi graph G_{N,E} is constructed starting from a set of N different vertices which are joined by E edges whose ends are selected at random among the N vertices.
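As an illustration, a minimal implementation of the G_{N,E} construction could look as follows; the use of networkx and the rejection of self-loops and duplicate edges are assumptions of this sketch, not prescriptions from the text.

```python
# Sketch of the Erdos-Renyi G_{N,E} construction: N labelled vertices joined by
# E edges whose endpoints are chosen uniformly at random (no self-loops or
# duplicate edges in this variant).
import random
import networkx as nx

def erdos_renyi_gne(N, E, seed=None):
    rng = random.Random(seed)
    G = nx.Graph()
    G.add_nodes_from(range(N))
    while G.number_of_edges() < E:
        u, v = rng.sample(range(N), 2)   # two distinct endpoints chosen at random
        G.add_edge(u, v)                 # adding an already existing edge has no effect
    return G

G = erdos_renyi_gne(N=10_000, E=30_000, seed=7)
degrees = [d for _, d in G.degree()]
print("average degree <k> = 2E/N =", sum(degrees) / len(degrees))
# The same ensemble is also available directly as nx.gnm_random_graph(10_000, 30_000).
```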
In the past few years, the study of large networked systems has received a boost from the ever-increasing availability of large data sets and computer power for their storage and manipulation. In particular, mapping projects of the World Wide Web and the physical Internet offered the first chance to study the topology of large complex networks. Gradually, other maps followed describing many networks of practical interest in social science, critical infrastructures, and biology. Indeed, large complex networks arise in a vast number of natural and artificial systems. The brain consists of many interconnected neurons; ecosystems consist of species whose interdependency can be mapped into intricate food webs. Social systems may be represented by graphs describing various interactions among individuals. Large networked infrastructures such as power grids and transportation networks are critical to our modern society. Finally, the living cell is not an exception, its organization and function being the outcome of a complex web of interactions among genes, proteins, and other molecules. In the search for the underlying laws governing the dynamics and evolution of these complex systems, researchers have started the systematic analysis and characterization of their network representations. A central result of these activities is that large-scale networks are generally characterized by complex topologies and very heterogeneous structures. These features usually find their signature in connectivity patterns statistically characterized by heavy tails and large fluctuations, scale-free properties, and non-trivial correlations such as high clustering and hierarchical ordering.
Large-scale collapses of the electric distribution grid and major Internet outages have shown that the stability of complex networks is a problem of crucial importance. The protection of critical infrastructures and the elaboration of efficient reaction strategies both rely on the identification of crucial (or weak) elements of the network and on the understanding of the progressive damage caused by successive removals or failures of nodes. In this context, it has been observed that complex network models usually display great stability even when confronted with a large number of repeated small failures, while at the same time major damage can be triggered, unpredictably, by apparently small shocks. This is consistent with empirical experience, in which unexpected, small perturbations may sometimes trigger large systemic failures.
In this chapter, we deal with the impact of the removal or failure of nodes in complex networks by studying their percolation behavior. Percolation models and the associated critical phenomena have been extensively studied in statistical physics and, even though they are not endowed with a high level of realism, they can be thought of as a zeroth-order approximation to a wide range of damaging processes. In particular, we focus on heterogeneous networks, which typically display both a large robustness to random failures and a high vulnerability to targeted attacks on the most crucial nodes.
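A numerical illustration of this robustness/fragility dichotomy (a sketch, not the analytical treatment developed in the chapter) is to remove nodes from a heavy-tailed network either uniformly at random or in decreasing order of degree, and to follow the size of the largest connected component; the graph model, library, and removal fractions below are illustrative assumptions.

```python
# Sketch: random failures vs. targeted attacks on a heterogeneous network,
# monitored through the relative size of the largest connected component.
import random
import networkx as nx

def remove_and_track(G, order, fractions=(0.05, 0.1, 0.2, 0.4)):
    """Remove nodes in the given order and record the size of the largest
    connected component (relative to the original N) at each removal fraction."""
    n = G.number_of_nodes()
    H = G.copy()
    removed, results = 0, {}
    for f in fractions:
        while removed < int(f * n):
            H.remove_node(order[removed])
            removed += 1
        giant = max((len(c) for c in nx.connected_components(H)), default=0)
        results[f] = round(giant / n, 3)
    return results

G = nx.barabasi_albert_graph(5000, m=2, seed=3)               # illustrative heavy-tailed graph
random_order = list(G.nodes())
random.Random(0).shuffle(random_order)                        # random failures
degree_order = sorted(G.nodes(), key=G.degree, reverse=True)  # attack the hubs first

print("random failures :", remove_and_track(G, random_order))
print("targeted attacks:", remove_and_track(G, degree_order))
```

Under targeted removal the giant component collapses at a much smaller removal fraction than under random failures, which is the behavior analyzed quantitatively in this chapter.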
The navigation and exploration of complex networks are obviously affected by the underlying connectivity properties and represent a challenging scientific problem with a large number of practical applications. The most striking example is the World Wide Web (WWW), an immense data repository which acquires practical interest and value only if it is possible to locate the particular information one is seeking. It is in this context, and with large-scale information and communication technologies (ICTs) in mind, that most of the models and studies have been developed. Because information discovery and retrieval rely on understanding the properties of random walks and diffusion phenomena in complex networks, considerable activity has been devoted to the investigation of these basic processes.
In this chapter we will review the main strategies that can be used to explore and retrieve information from the vertices of a network. In particular, we want to show how the topological properties of networks affect search and retrieval strategies, and how these strategies can effectively take advantage of the network's connectivity structure. We start from basic considerations on diffusion processes taking place on networks and then move on to more applied work addressing well-defined problems encountered in the ICT world.
Diffusion processes and random walks
The discovery process and the related algorithms in large-scale networks are closely connected to the properties of a random walk diffusing on a network with a given topology.
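As a minimal illustration of this connection, the sketch below (assuming networkx, an undirected unweighted graph, and illustrative parameters) simulates a walker that hops to a uniformly chosen neighbor at each step and compares its visit frequencies with the stationary distribution of such a walk, which is proportional to the node degree, k_i/2E.

```python
# Sketch: unbiased random walk on an undirected network. At stationarity the
# walker visits node i with probability k_i / (2E), i.e. proportionally to its degree.
import random
import networkx as nx

def random_walk_visits(G, steps=200_000, seed=0):
    rng = random.Random(seed)
    node = rng.choice(list(G.nodes()))
    visits = {v: 0 for v in G.nodes()}
    for _ in range(steps):
        node = rng.choice(list(G.neighbors(node)))   # hop to a uniformly chosen neighbor
        visits[node] += 1
    return visits

G = nx.barabasi_albert_graph(2000, m=3, seed=1)       # illustrative (connected) graph
visits = random_walk_visits(G)
total = sum(visits.values())
two_E = 2 * G.number_of_edges()
# Compare empirical visit frequency with the theoretical k_i / 2E for a few hubs.
for v in sorted(G.nodes(), key=G.degree, reverse=True)[:5]:
    print(v, G.degree(v), round(visits[v] / total, 5), round(G.degree(v) / two_E, 5))
```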
The main role of many networks is to provide the physical substrate for the flow of some physical quantity, data, or information. In particular, this is the case for the transportation and technological infrastructures upon which the everyday functioning of our society relies. In this context, congestion phenomena, failures, and breakdown avalanches can have dramatic effects, as witnessed in major blackouts and transportation breakdowns. Also, in technological networks, if individual elements do not have the capacity to cope with the amount of data to be handled, the network as a whole might be unable to perform efficiently. The understanding of congestion and failure avalanches is therefore a crucial research question when developing strategies aimed at network optimization and protection.
Clearly the study of network traffic and of the emergence of congestion and large-scale failures cannot neglect the specific nature of the system. The huge literature addressing these questions is therefore domain oriented and goes beyond the scope of this book. On the other hand, large-scale congestion and failure avalanches can also be seen as emergent collective phenomena and, as such, can be studied through simple models with the aim of abstracting the general features due to the basic topology of the underlying network. This chapter is devoted to an overview of results concerning traffic, congestion, and avalanches, and focuses more on the general and ubiquitous features of these phenomena than on system-specific properties.
Networks have long been recognized as having a central role in the biological sciences. They are the natural underlying structures for the description of a wide array of biological processes, across scales varying from molecular processes to species interactions. Especially at smaller scales, most genes and proteins do not have a function on their own; rather, they acquire a specific role through the complex web of interactions with other proteins and genes. In recent years this perspective, largely fostered by the abundance of high-throughput experiments and the availability of entire genome sequences and gene co-expression patterns, has led to a stream of activity focusing on the architecture of biological networks.
The abundance of large-scale data sets on biological networks has revealed that their topological properties in many cases depart considerably from the random homogeneous paradigm. This evidence has spurred intense research activity aimed at understanding the origin of these properties as well as their biological relevance. The problem amounts to linking structure and function, in most cases, by understanding the interplay of topology and dynamical processes defined on the network. Empirical observations of heterogeneities have also revamped several areas and landmark problems such as Boolean network models and the issue of stability and complexity in ecosystems.
While the concepts and methods of complex network analysis are nowadays standard tools in network biology, it is clear that their relevance and role have to be critically examined by taking into account the specific nature of the biological problem at hand.
The present chapter is intended to provide a short introduction to the theory and modeling of equilibrium and non-equilibrium processes on networks and to define the basic modeling approaches and techniques used in the theory of dynamical processes. In particular, we define the master equation formalism and distinguish between equilibrium and non-equilibrium phenomena. Unfortunately, while the master equation allows for important conceptual distinctions and categorizations, its complete solution is rarely achievable even for very simple dynamical processes. For this reason we introduce the reader to techniques such as mean-field and continuous deterministic approximations, which usually represent viable approaches for understanding the basic features of the process under study. We also discuss Monte Carlo and agent-based modeling approaches, which are generally implemented in large-scale numerical simulation methods.
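To make the distinction between these levels of description concrete, here is a toy example (not taken from the chapter): a single element switching between two states with constant rates, for which the one-variable master equation is integrated numerically and compared with a direct Monte Carlo simulation of the same stochastic rule; all rates and parameters are illustrative assumptions.

```python
# Toy example: a two-state process (0 <-> 1) with switching rates r01 and r10.
# The master equation dP1/dt = r01*P0 - r10*P1 is integrated numerically and
# compared with the occupation fraction measured in a Monte Carlo simulation.
import random

r01, r10 = 0.3, 0.1            # illustrative switching rates (per unit time)
dt, t_max = 0.01, 50.0

# Master equation: deterministic evolution of the occupation probability P1(t).
p1, t = 0.0, 0.0
while t < t_max:
    p1 += dt * (r01 * (1.0 - p1) - r10 * p1)
    t += dt
print("master equation  P1(t_max) =", round(p1, 4))
print("exact stationary P1        =", round(r01 / (r01 + r10), 4))

# Monte Carlo: many independent stochastic realizations of the same rule.
rng = random.Random(0)
n_runs, in_state_1 = 2000, 0
for _ in range(n_runs):
    state, t = 0, 0.0
    while t < t_max:
        rate = r01 if state == 0 else r10
        if rng.random() < rate * dt:   # switching probability in a small time step dt
            state = 1 - state
        t += dt
    in_state_1 += state
print("Monte Carlo      P1(t_max) ~", round(in_state_1 / n_runs, 4))
```

The deterministic integration gives the probability directly, while the Monte Carlo estimate fluctuates around it and converges only as the number of realizations grows, which is the trade-off discussed in the chapter.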
These different theoretical methods help to define a general framework to demonstrate how the microscopic interactions between the elements of the system lead to cooperative phenomena and emergent properties of the dynamical processes. This strategy, going from microscopic interaction to emergent collective phenomena, has its roots in statistical physics methodology and population dynamics, and is currently viewed as a general paradigm to bridge the gap between the local and the large-scale properties of complex systems. It is important to stress, however, that the following material is a greatly abbreviated presentation of a huge field of research and by necessity just scratches the surface of the statistical theory of dynamical processes.
The study of collective behavior in social systems has recently witnessed an increasing number of works relying on computational and agent-based models. These models use very simplistic schemes for the micro-processes of social influence and focus instead on the emerging macro-level social behavior. Agent-based models for social phenomena are very similar in spirit to the statistical physics approach: the agents update their internal state through interaction with their neighbors, and the emergent macroscopic behavior of the system is the result of a large number of these interactions.
The behavior of these models has been extensively studied for agents located on the nodes of regular lattices or able to interact homogeneously with each other. As described in Chapter 2, however, interactions between individuals and the structure of social systems can generally be represented by complex networks whose topologies exhibit many non-trivial properties, such as the small-world effect, high clustering, and strong heterogeneity of the connectivity pattern. Attention has therefore recently shifted to the study of the effect of more realistic network structures on the dynamical evolution and emergence of social phenomena and organization. In this chapter, we review the results obtained in four prototypical models for social interactions and show the effect of the network topology on the emergence of collective behavior.
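As an example of the kind of agent-based scheme discussed above, the sketch below implements the voter model, a standard prototypical rule of opinion dynamics, on a small-world network (the specific models and networks reviewed in the chapter may differ): at each step a randomly chosen agent copies the opinion of a randomly chosen neighbor.

```python
# Sketch: voter-model dynamics on a network. Each agent holds a binary opinion;
# at every update a random agent adopts the opinion of a random neighbor.
import random
import networkx as nx

def voter_model(G, sweeps=200, seed=0):
    rng = random.Random(seed)
    opinion = {v: rng.randint(0, 1) for v in G.nodes()}
    nodes = list(G.nodes())
    n = len(nodes)
    for sweep in range(sweeps):
        for _ in range(n):                          # one sweep = N update attempts
            i = rng.choice(nodes)
            j = rng.choice(list(G.neighbors(i)))
            opinion[i] = opinion[j]                 # copy the neighbor's opinion
        if len(set(opinion.values())) == 1:         # consensus reached
            return sweep + 1, opinion
    return sweeps, opinion

# Illustrative small-world substrate (guaranteed connected so every node has neighbors).
G = nx.connected_watts_strogatz_graph(500, k=6, p=0.1, seed=2)
sweeps, opinion = voter_model(G)
frac = sum(opinion.values()) / len(opinion)
print(f"stopped after {sweeps} sweeps; fraction holding opinion 1 = {frac:.2f}")
```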
I am conscious of being only an individual struggling weakly against the stream of time. But it still remains in my power to contribute …
Ludwig Boltzmann
In Chapter 4 we discussed the connection between chaos and equilibrium statistical mechanics, in particular with respect to the ergodic hypothesis. We saw that in systems with many degrees of freedom, chaos (in the sense of at least one positive Lyapunov exponent) is not strictly necessary (nor sufficient) to obtain good agreement between the time average and phase average. This is due, as Boltzmann himself thought and Khinchin proved for an ideal gas, to the fact that in systems with many components, for a large class of observables, the validity of the ergodic hypothesis is basically a consequence of the law of large numbers, and it has a rather weak connection with the underlying dynamics.
From a conceptual point of view the ergodic approach (possibly in a “weak” variant, pertaining only to some interesting macroscopic variables) can be seen as a natural way to introduce probabilistic concepts in a deterministic context. In addition, since one deals with a unique system (although with many degrees of freedom), ergodicity is a possible (unique?) way to found equilibrium statistical mechanics on physical grounds, i.e. by exploiting the frequentist interpretation of probability to extract a statistical description from the analysis of a single experimental trajectory. Finally, on the basis of the results in Chapter 4, and not forgetting that thermodynamics, as a physical theory, was developed to describe the properties of single systems made of many microscopic interacting parts, it seems to us quite fair to conclude that the ensemble viewpoint is just a useful mathematical tool.
If we study the history of science we see produced two phenomena which are, so to speak, each the inverse of the other. Sometimes it is simplicity which is hidden under what is apparently complex; sometimes, on the contrary, it is simplicity which is apparent, and which conceals extremely complex realities.
Henri Poincaré
Devoting only a short chapter to the renormalization group (RG) is a challenge owing to the wealth of both fundamental concepts and computational tools associated with this term. The RG is indeed encountered in many different domains of theoretical physics, ranging from quantum electrodynamics to second-order phase transitions to fractal growth and diffusion processes – one should rather speak of renormalization groupS! We refer to Brown (1993) and Fisher (1998) for an historical account, and to Goldenfeld (1992), Lesne (1998) and references therein for an overview of the domains of application and variants of RGs.
Here we emphasize that the RG is a way, maybe the most successful and without doubt the most systematic and constructive, to derive effective low-dimensional descriptions that capture large-scale and/or long-time behavior. The RG can be extended far beyond the specific scope of critical phenomena, to an iterated multiscale approach allowing the construction of robust and minimal macroscopic models describing the universal large-scale features and asymptotics of a complex system. This generalized viewpoint brings out the close logical and even technical connections that bridge, within a unified framework, perturbative RG for singular series expansions, spin-block RG and momentum-shell RG for critical phenomena, RG for the asymptotic analysis of differential and partial differential equations, and probabilistic RG for the derivation of statistical laws and limit theorems.
Since in the differential equations of mechanics themselves there is absolutely nothing analogous to the Second Law of thermodynamics the latter can be mechanically represented only by means of assumptions regarding initial conditions.
Ludwig Boltzmann
By irreversibility here we mean a quite evident fact that is part of everyone's everyday experience: there are many phenomena whose reverse-order evolution we do not see, and do not expect to see. For instance, if we put a cup of hot coffee on the table, then after a while it gets colder, transferring some heat to the environment. However, if the coffee, now at room temperature, has become too cold and we want to warm it up, we put it in the microwave oven. This is because we are pretty sure that the required heat will not flow into the coffee from the surroundings, even though the reverse process has just taken place, as it always does. This certainty allows us to judge time ordering. Given the two states of the coffee on the table, hot and cold, we know that, without external intervention, the hot state cannot come after the cold state: it always comes before. As a consequence, once the cooling has occurred, we say that something (spontaneously) irreversible has happened. So, irreversibility is the asymmetric time evolution of certain macroscopic systems. The theoretical frame in which this kind of irreversibility is accommodated is thermodynamics, where the Second Principle dictates the prohibitions.
The problem
As a fact within the coherent theoretical description of thermodynamics, irreversibility is not a problem.