Edited by
Janos J. Bogardi, Division of Water Sciences, UNESCO, Paris; Zbigniew W. Kundzewicz, Research Centre of Agricultural and Forest Environment, Polish Academy of Sciences
The existence of a gap between the theory and practice of risk and reliability in water resources management has been widely recognized. An overview of the stages of risk management is offered: identification of failure modes; evaluation of the likelihood of each failure mode; listing of the consequences of each failure; and risk valuation and mitigation. The results of these stages are synthesized to form the basis for selection of an optimal plan or policy. Finally, the role of forecasting in risk management practice is reviewed.
INTRODUCTION
There is a gap between the theory and practice of incorporating considerations of risk and reliability into management of water resources systems. The reasons for this are:
Criteria for defining risk and quantifying reliability have not been standardized and accepted; and
Methodologies for incorporating reliability measures and criteria into procedures and formal models for the management of water supply systems are still not well developed, and the complexity of those methods and models that do exist makes them difficult to use in engineering practice.
Reliability criteria should be defined from the point of view of the consumer and reflect the costs of less-than-perfect reliability. Here “cost” means the measurable economic losses due to failures, such as the loss of crops that are not irrigated on time or the loss of production in a factory, and any other loss incurred by failure to supply an adequate quantity of good-quality water at the time it is required. Losses are very difficult to measure and quantify, especially those associated with the quality of life of urban consumers.
We are pleased to offer the reader a volume consisting of contributions to the Third George Kovacs Colloquium, held at UNESCO, Paris, from September 19 to 21, 1996. It continues a series of biannual international scientific meetings organized jointly under the auspices of the International Hydrological Programme (IHP) of UNESCO and the International Association of Hydrological Sciences (IAHS) in challenging fields of water resources research. These meetings commemorate the late Professor George Kovacs, an established authority in hydrology, who rendered valuable service to both organizations convening this Colloquium. Professor Kovacs was Chairman of the Intergovernmental Council of the IHP of UNESCO and President of the IAHS.
The theme of the Colloquium, “Risk, Reliability, Uncertainty, and Robustness of Water Resources Systems,” denotes an essential recent growth area of research into water resources, with challenges and difficulties galore. The two-and-a-half-day Colloquium included twenty-four oral presentations covering a broad range of scientific issues. It dealt with different facets of uncertainty in hydrology and water resources and with several aspects of risk, reliability, and robustness.
The contributions to the Colloquium concentrated on state-of-the-art approaches to the inherent problems. They also outline possible future developments and identify challenging prospects for further research and applications. Presentations included both theoretical and applied studies, and several papers dealt with regional problems. Methodological contributions focused on underlying concepts and theories.
The presentations at the Colloquium, based on invitation, were delivered in three categories: keynote lectures, invited lectures, and young scientists' communications. Contributions belonging to all three categories are included in this volume.
Investigations of the impact of climate change on water resources systems usually involve detailed monthly hydrological, climatological, and reservoir systems models for a particular system. The conclusions derived from such studies only apply to the particular system under investigation. This study explores the potential for developing a regional hydroclimatological assessment model useful for determining the impact of changes in climate on the behavior of water supply systems over a broad geographic region. Computer experiments performed across the United States reveal that an annual streamflow model is adequate for regional assessments which seek to approximate the behavior of water supply systems. Using those results, a general methodology is introduced for evaluating the sensitivity of water supply systems to climate change in the northeastern United States. The methodology involves the development of a regional hydroclimatological model of annual streamflow which relates the first two moments of average annual streamflow to climate and drainage area at 166 gaging stations in the northeastern United States. The regional hydroclimatological streamflow model is then combined with analytic relationships among water supply system storage, reliability, resilience, and yield. The sensitivity of various water supply system performance indices such as yield, reliability, and resilience is derived as a function of climatic, hydrologic, and storage conditions. These results allow us to approximate, in general, the sensitivity of water supply system behavior to changes in the climatological regime as well as to changes in the operation of water supply systems.
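At the heart of such a regional assessment is a regression relating the moments of annual streamflow to climate and drainage area. The sketch below illustrates the general idea only; the log-linear form, the station data, and every numeric value are invented for demonstration and are not taken from the chapter.

```python
import numpy as np

# Hypothetical regional model: relate mean annual streamflow (log-linear
# form assumed) to mean annual precipitation and drainage area. All
# station data below are synthetic, generated only for this sketch.
rng = np.random.default_rng(42)
n = 166                                   # number of gaging stations, as in the study
precip = rng.uniform(800.0, 1400.0, n)    # mean annual precipitation, mm
area = rng.uniform(50.0, 5000.0, n)       # drainage area, km^2
log_q = 0.5 + 1.2 * np.log(precip) + 1.0 * np.log(area) + rng.normal(0.0, 0.1, n)

# Ordinary least squares on the log-transformed variables
X = np.column_stack([np.ones(n), np.log(precip), np.log(area)])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)

def predict_mean_flow(p, a):
    """Mean annual streamflow predicted by the fitted log-linear model."""
    return np.exp(beta[0] + beta[1] * np.log(p) + beta[2] * np.log(a))

# Sensitivity to climate change: effect of a 10% drop in precipitation
base = predict_mean_flow(1000.0, 500.0)
drier = predict_mean_flow(900.0, 500.0)
print(f"relative change in mean flow: {drier / base - 1.0:.3f}")
```

Because the model is linear in the logs, the relative change in mean flow implied by a given relative change in precipitation is the same for every basin, which is part of what makes a regional formulation attractive for broad-scale sensitivity studies.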
Bayesian methods have been developed to analyze three main types of uncertainty, namely: model uncertainty, parameter uncertainty, and sampling errors. To illustrate these techniques on a real case study, a model has been developed to quantify the various uncertainties in predicting the global proportion of coliform-positive samples (CPS) in a water distribution system where bacterial pollution indicators are monitored weekly by sanitation authorities. The data used to fit and validate the model correspond to water samples gathered in the suburbs of Paris. The model uncertainty has been evaluated in the reference class of generalized linear multivariate autoregressive models. The model parameter distributions are determined using the Metropolis-Hastings algorithm, one of the Markov chain Monte Carlo methods. Such an approach, successful in water quality control, should also be very powerful for rare-event modeling in hydrology or in other fields such as ecology.
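As a hedged illustration of the Metropolis-Hastings step, far simpler than the chapter's generalized linear multivariate autoregressive model, the sketch below samples the posterior of a single proportion of coliform-positive samples. The monitoring counts, prior, and proposal scale are all invented.

```python
import numpy as np

# Metropolis-Hastings sketch: posterior of the proportion p of
# coliform-positive samples, with a uniform prior and invented data
# (13 positives in 520 hypothetical weekly samples).
rng = np.random.default_rng(0)
n_samples, n_positive = 520, 13

def log_posterior(p):
    """Binomial log-likelihood times a uniform prior (constants dropped)."""
    if not 0.0 < p < 1.0:
        return -np.inf
    return n_positive * np.log(p) + (n_samples - n_positive) * np.log(1.0 - p)

chain = []
p = 0.5                                   # deliberately poor starting value
for _ in range(20000):
    prop = p + rng.normal(0.0, 0.02)      # random-walk proposal
    # Accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(p):
        p = prop
    chain.append(p)

posterior = np.array(chain[5000:])        # discard burn-in
print(f"posterior mean of p: {posterior.mean():.4f}")
```

The retained draws approximate the posterior distribution, so quantities such as credible intervals for the CPS proportion come directly from the chain rather than from asymptotic formulas.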
INTRODUCTION
Bacterial pollution indicators are understood here as coliforms, a group of bacteria that is “public enemy number one” for water suppliers. Their occurrence in distributed domestic water is a major concern for many utility companies. The coliform group includes many different species, the most famous being Escherichia coli. Some of the bacteria belonging to the coliform group are fecal bacteria and may provoke gastroenteritis or other digestive problems; the remaining, inoffensive part is generally considered an indicator of the possible presence of their more dangerous cousins.
Effective protection of a drinking water well against pollution by persistent compounds requires knowledge of the well's capture zone. This zone can be computed by means of groundwater flow models. However, because the accuracy and uniqueness of such models are very limited, the outcome of a deterministic modeling exercise may be unreliable. In this case stochastic modeling may present an alternative for delimiting the possible extent of the capture zone. In a simplified example two methods are compared: unconditional and conditional Monte Carlo simulation. In each case, realizations of an aquifer characterized by a recharge rate and a transmissivity value are produced. By superposition of the capture zones from each realization, a probability distribution can be constructed which indicates, for each point on the ground surface, the probability of belonging to the capture zone. Conditioning with measured heads may both shift the mean and narrow the width of this distribution. The method is applied to the more complex example of a zoned aquifer. Starting from an unconditional simulation with recharge rates and transmissivities randomly sampled from given intervals, observed head data are successively added. The transmissivities in zones that do not contain head data are generated stochastically within boundaries typical for the zone, while the remaining zonal transmissivities are determined in each realization through inverse modeling. With a growing number of conditioning data, the probability distribution of the capture zones is shown to narrow. The approach also allows quantification of the value of data: data are more valuable the larger the decrease in uncertainty they bring about.
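The unconditional variant can be sketched with a classical analytic shortcut rather than a full flow model: for a fully penetrating well of discharge Q in uniform regional flow, the maximum half-width of the capture zone is y_max = Q / (2 T i), with transmissivity T and hydraulic gradient i. The sketch below samples T from an assumed interval (all numbers invented) and estimates, for one surface point, the probability of belonging to the capture zone.

```python
import numpy as np

# Unconditional Monte Carlo sketch of capture-zone uncertainty.
# Assumed, invented parameters: well discharge Q, regional gradient i,
# and a uniform sampling interval for transmissivity T.
rng = np.random.default_rng(1)
Q = 0.01            # well discharge, m^3/s
i = 0.002           # regional hydraulic gradient
y_point = 1000.0    # lateral distance of the point of interest, m

n_real = 10000
T = rng.uniform(1e-3, 5e-3, n_real)     # transmissivity realizations, m^2/s
y_max = Q / (2.0 * T * i)               # capture half-width per realization

# Fraction of realizations whose capture zone contains the point
p_capture = np.mean(y_point < y_max)
print(f"probability the point lies in the capture zone: {p_capture:.3f}")
```

Conditioning on head measurements would, as described above, discard or reweight realizations inconsistent with the observations, typically narrowing this probability toward 0 or 1.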
Management of water resources is inherently subject to uncertainties due to data inadequacy and errors, modeling inaccuracy, randomness of natural phenomena, and operational variability. Uncertainties are associated with each of the contributing factors or components of a water resources system and with the system as a whole. Uncertainty can be measured in terms of the probability density function, confidence interval, or statistical moments such as standard deviation or coefficient of variation. In this chapter, available methods for uncertainty analysis are briefly reviewed.
INTRODUCTION
The planning, design, and operation of water resources systems usually involve many components or contributing factors. Each component or factor individually, and the system as a whole, is always subject to uncertainties. For example, the reliability of a flood forecast depends not only on the uncertainty of the prediction model itself but also on the uncertainties in the input data. The design of a storm drain is subject to uncertainty in the runoff simulation model used, in the determination of the design storm, and in the materials, construction, and maintenance. Water supply is always subject to uncertainties in demand, in the availability of water sources, and in the performance of the distribution network. The safety of a dam depends not only on the magnitude of the flood but also on waves, earthquakes, the condition of the foundation, and the maintenance and appropriateness of the operational procedure. Numerous other examples can be cited. In short, decisions on water resources are always subject to uncertainties.
Knowledge of uncertainties is useful for rational decision making, for cost-effective design, for safe operation, and for improved public awareness of water resources risks and reliability.
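The uncertainty measures mentioned above (statistical moments such as the standard deviation, the coefficient of variation, and the confidence interval) can be computed for a small sample of annual flood peaks. The values below are invented purely for illustration.

```python
import numpy as np

# Hypothetical sample of ten annual flood peaks (m^3/s); values invented.
peaks = np.array([410., 385., 522., 298., 451., 390., 610., 344., 480., 405.])

mean = peaks.mean()
std = peaks.std(ddof=1)        # sample standard deviation (second-moment measure)
cv = std / mean                # coefficient of variation (dimensionless)

# Approximate 95% confidence interval for the mean (normal approximation;
# a t-based interval would be wider for a sample this small)
half = 1.96 * std / np.sqrt(len(peaks))
ci = (mean - half, mean + half)
print(f"mean={mean:.1f}, CV={cv:.3f}, 95% CI=({ci[0]:.1f}, {ci[1]:.1f})")
```

The coefficient of variation is often preferred in hydrology because it allows uncertainty to be compared across catchments with very different mean flows.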
The primary objective of this chapter is the development of an appropriate social indicator which represents society's robustness against disaster risk. With a focus on the risk of drought, a sociopsychological approach based on the concept of social representation is presented. As the working hypothesis, society's perceived level of readiness against drought (SPRD) is defined in terms of the social message of relevant newspaper articles, as measured by article area. Using actual drought cases for the cities of Takamatsu and Fukuoka, this working hypothesis has been examined and reexamined from two different analytical viewpoints – that is, a contextual analysis and an analysis of the water-saving phenomenon. The results are shown to be basically positive, supporting the working hypothesis.
INTRODUCTION
The Great Hanshin-Awaji Earthquake, which struck the heart of the Kobe-centered metropolis on January 17, 1995, demonstrated the often forgotten fact that the citizens of a large city are bound to coexist with the risks of urban disaster. Though not as catastrophic as the earthquake, a drought of unprecedented scale struck several metropolitan regions in Japan during the preceding summer. Among them are the cities of Fukuoka (on Kyushu Island) and Takamatsu (on Shikoku Island), which suffered the most serious socioeconomic damage. This must have strengthened society's awareness that large cities are bound to coexist with the risks of urban disaster. In both cases, citizens seem to have learned that a community's disaster risk awareness makes a difference to the severity of the disaster damage.
A critical contemplation is offered of what is known and what is not known about the hydrological, economic, and societal uncertainties that are eagerly and routinely subjected to all kinds of mathematical prestidigitation in the process of risk analysis of water resources systems. An illustration of the issues is provided by a reality check on some real-life situations, mostly based on the author's own experience from the past forty years. Three simple recommendations are offered that may bring risk analysis down to earth.
INTRODUCTION
Spurred by the newly available computing technology, the influx of mathematical statistics and probability theory into the analysis of uncertainty in hydrology and water resources in the 1960s was powerful and unprecedented. Also unprecedented has been its side effect – an outburst of “applications” of various spurious theories suddenly made so easy by the computer. By the mid-1970s the danger of this malignant growth was already evident and solitary warning voices could be heard. One of the clearest and most passionate was that of the late Myron Fiering of Harvard University: “Fascination with automatic computation has encouraged a new set of mathematical formalisms simply because they now can be computed; we have not often enough asked ourselves whether they ought to be computed or whether they make any difference … we build models to serve models to serve models to serve models, and with all the computation, accumulated truncation, roundoff, sloppy thinking, and sources of intellectual slippage, there is some question as to how reliable are the final results” (Fiering 1976).
Motto: Sustainable is what people agree is sustainable.
Abstract
This chapter introduces the field of integrated regional risk assessment and safety management for energy and other complex industrial systems. The international initiative includes compilation of methods and guidelines, and development of various models and decision support systems to assist implementation of various tasks of risk assessment at the regional level. The merit of GIS methodology is highlighted.
INTRODUCTION
Almost ten years after UNCED (the United Nations Conference on Environment and Development, Rio de Janeiro, Brazil, 1992), some progress has been achieved in the protection of the environment, development policies, and strategic future topics. A number of issues connected with the topic of this chapter were addressed by UNCED's Agenda 21.
Issue 1. In achieving sustainable development, environmental protection shall constitute an integral part of the development process.
Issue 2. Environmental issues are best handled with the participation of all concerned citizens.
Issue 3. National authorities should endeavor to promote the internalization of environmental costs.
Issue 4. Information for decision making would involve:
bridging the data gap;
improving the availability of information.
Issue 5. Emergency planning and preparedness are integral parts of a coherent sustainable development.
Regional risk assessment and safety planning is a coordinated strategy for risk reduction and safety management in a spatially defined region, across a broad range of hazard sources. It deals with the normal operation of plants as well as with accidental situations, including synergistic effects.
The problem of water quality management under uncertain emission levels, reaction rates, and pollutant transport is considered. Three performance measures – reliability, resiliency, and vulnerability – are taken into account. A general methodology for finding a cost-effective water quality management program is developed. The approach employs a new stochastic branch and bound method that combines random estimates of the performance for subsets of decisions with iterative refinement of the most promising subsets.
INTRODUCTION
Devising successful and cost-effective water quality management strategies can be difficult because the inputs to, and the behavior of, the system being managed are never entirely predictable. Decision makers do not know what conditions will exist in the future nor how these conditions will affect the impact of their decisions on the environment. Vincens, Rodriguez-Iturbe, and Schaake (1975) classify uncertainty in modeling hydrologic systems into three categories: uncertainty in the model structure (Type I uncertainty); uncertainty in the model parameters (Type II uncertainty); and uncertainty resulting from natural variability (Type III uncertainty). For water quality systems, uncertainty in the pollutant transport model, the model reaction rates, and the natural variability of emission rates and receiving water conditions, such as streamflow, temperature, and background pollutant loadings from unregulated pollution sources, contribute to difficulties in predicting the future behavior of the system (Beck 1987). This chapter develops an approach for identifying water quality management solutions under Type II and Type III uncertainty. It is based on an application of the stochastic branch and bound method of Norkin, Ermoliev, and Ruszczyński (1994) to water quality management, which is modified to account for the performance indicators of reliability, resiliency, and vulnerability.
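The three performance measures can be evaluated directly on a simulated trace of the managed system. The sketch below uses an invented daily concentration series and common Hashimoto-style definitions of reliability, resiliency, and vulnerability; it is an illustration of the indicators, not of the chapter's stochastic branch and bound method.

```python
import numpy as np

# Invented daily pollutant concentration series versus an assumed
# water quality standard; all parameters are hypothetical.
rng = np.random.default_rng(7)
conc = rng.lognormal(mean=1.0, sigma=0.4, size=365)   # daily concentration
standard = 4.0                                        # quality standard

fail = conc > standard

# Reliability: fraction of time the system is in compliance
reliability = 1.0 - fail.mean()

# Resiliency: probability of returning to compliance one step after a failure
n_fail = fail[:-1].sum()
resiliency = (fail[:-1] & ~fail[1:]).sum() / n_fail if n_fail else 1.0

# Vulnerability: mean magnitude of exceedance during failures
vulnerability = (conc[fail] - standard).mean() if fail.any() else 0.0
print(f"reliability={reliability:.3f}, resiliency={resiliency:.3f}, "
      f"vulnerability={vulnerability:.3f}")
```

In a stochastic branch and bound scheme, such Monte Carlo estimates of the indicators would be computed for subsets of candidate management decisions, with the most promising subsets refined iteratively.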
In chapter 5, we developed a logical framework for assessing impacts, including the necessity for control or reference areas, and for sampling to occur before and after the putative impact. In this chapter, we consider practical details of the monitoring, focusing on the formal design and statistical analysis appropriate to the detection of impacts, and on the practical details associated with executing these designs in running waters. It is important to realize that we can only translate general design principles into a specific plan to collect data if we specify the statistical model that is to be fitted to the data. Perhaps the most important message of this chapter is that apparently similar monitoring ‘questions’ can have quite different statistical models behind them. These different models, in turn, can lead to quite different advice about how to optimize a particular data collection program.
If we consider the two major tasks outlined earlier, the formal test for the existence of an unacceptable impact and the characterization of the spatial extent of any impact, the latter procedure is relatively straightforward in terms of the design and underlying statistical models. The detailed design is modified by practical considerations associated with stream environments, and the characteristics of the activity suspected of having an impact.
In contrast, there is a range of design options for detecting an impact.
Following the arguments and considering the issues presented in this book will allow us to design an effective and flexible monitoring program to detect and evaluate impacts in flowing waters. This includes the negotiations of what effect size is important to detect and what elements can be traded off or even sacrificed (as discussed in chapters 12 and 13). So now we've implemented the monitoring design and presumably detected (or not) some impact with known confidence – but the job is not yet finished. Further negotiations are in order to continue or refine assessment. Truly effective management of impacts requires that some action follows the well-designed studies we have so far advocated in this book. This chapter discusses issues that are central to what needs to be done after the main monitoring task has been completed.
LINKS WITH MANAGEMENT DECISIONS AS POINTS OF NEGOTIATION
We have emphasized the role that input from monitoring data (in terms of results and their interpretation) should have in management decision-making if we are to be engaged in a task that makes any difference. There is an imperative for environmental assessments to become more sophisticated and responsive to societal needs. Monitoring can be reactive (used only once an impact is clearly observed), proactive (seeking to assess impacts before they manifest themselves; see Fairweather 1993; Fairweather & Lincoln Smith 1993) or progress through adaptive learning. The latter means not just trying to benefit from mistakes but also combining elements of learning from both the scientific and management sides.
Statistical decision theory has a long history and can basically be viewed in two ways. Classical statistical hypothesis-testing in the Neyman–Pearson form (Neyman & Pearson 1928), which we described in chapter 4, emphasizes decision errors from the test of a null hypothesis, and these errors have a frequentist interpretation. In contrast, what is termed modern statistical decision theory has a strong Bayesian influence (Berger 1985; Hamburg 1985; Pratt et al. 1996) and has emphasized monetary costs and benefits from decisions in an economic and management context. Nonetheless, all statistical decision problems have certain characteristics in common (Box 12.1; Hamburg 1985; Neter et al. 1993). In this chapter, we will focus on errors associated with the components of the decision-making process and how the choice of criteria for making decisions interacts with the design of the monitoring program.
MAKING STATISTICAL DECISIONS
We need to examine the errors possible from a statistical decision-making process in an environmental monitoring context. In chapter 4 (see Table 4.4) we defined two possible types of error. These errors arise because we are making decisions about the truth or otherwise of hypotheses about unknown population parameters from imperfect samples. If we could record an entire population, such as all the possible locations upstream and downstream of the mine, then we could make decisions about the truth of hypotheses about those parameters without sampling error. Errors of inference may still arise, due to measurement error and confounding.
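The two error types can be made concrete by simulation. In the hedged sketch below (sample sizes, effect size, and the upstream/downstream framing are all invented for illustration), a two-sample z-test compares upstream and downstream means: rejecting a true null hypothesis is a Type I error, and failing to detect a real shift is a Type II error.

```python
import numpy as np

# Simulated error rates for a two-sample z-test with n = 50 observations
# per group and a two-sided critical value of 1.96 (alpha = 0.05).
rng = np.random.default_rng(3)
n, sims, crit = 50, 5000, 1.96

def reject(shift):
    """Run `sims` simulated studies; return whether each rejects the null."""
    up = rng.normal(0.0, 1.0, (sims, n))          # upstream (control) samples
    down = rng.normal(shift, 1.0, (sims, n))      # downstream samples
    se = np.sqrt(up.var(axis=1, ddof=1) / n + down.var(axis=1, ddof=1) / n)
    z = (down.mean(axis=1) - up.mean(axis=1)) / se
    return np.abs(z) > crit

type_i = reject(0.0).mean()       # null true: any rejection is a Type I error
type_ii = (~reject(0.5)).mean()   # true shift of 0.5 SD: a miss is a Type II error
print(f"Type I rate ~ {type_i:.3f}, Type II rate ~ {type_ii:.3f}")
```

The simulation makes the trade-off visible: with these settings the Type I rate sits near the nominal 0.05, while a modest real impact of half a standard deviation is still missed in a substantial fraction of studies, which is exactly the tension that monitoring design must negotiate.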
In the last chapter, we introduced the different sorts of analytical models we can use to detect impacts. There are distinct and important differences between these models, and yet understanding these differences is only the first step. The next step is to be able to apply these designs sensibly and usefully in real situations. In many cases as we will see, the ecology of streams and rivers (and, indeed, many other environments) is not sufficiently well understood to make perfect or even very good decisions about design in every instance. As we shall argue below, however, the critical issue is to understand how to make good design decisions, and, because monitoring designs always involve compromises, to be very clear and explicit about the reasoning upon which such decisions were based. The remaining part of the process, then, is to understand what monitoring designs can tell you definitively and what they cannot – what can be legitimately and logically interpreted from the data versus what will remain unclear.
Below we describe a hierarchy of decisions that will help to properly define the nature of the problems faced in monitoring impacts in flowing waters. There are several problems we have to solve to apply good design principles: the location and character of control locations, and the frequency of sampling through time. In many places, we will suggest that a systematic and well-structured review of the literature will be necessary to solve these problems.
Chapter 5 described the logic behind the BACI approach and chapter 7 described the four basic BACI-type analytical models and the relative strengths of inference each provides (Table 7.1). We can make strong inferences (those with least uncertainty) about the effects of human impacts by examining differences between control and impact locations before and after the onset of human activity, most especially when we have replication of these design elements. However, what happens when one or more BACI elements are entirely missing or when we have no replication? Perhaps the most common problem facing environmental managers is where putative impacts have already occurred, tens or even hundreds of years before, and there is no scope for planning a Before period. There may be no control locations because all suitable locations have suffered the same human activity in question. The latter problem is particularly common when modern human activities are spread over large spatial scales because, as indicated earlier, it reduces the potential numbers of places we can search for controls. How should we proceed in these circumstances?
We must recognize first that the difficulties created here are ones of increased inferential uncertainty, not which analytical model to apply. When one or more of the four elements are missing, we lack the information that would otherwise allow us to distinguish, with some confidence, those changes caused by human impacts from those caused by alternative (natural) phenomena (Table 9.1; and see chapter 5).
In this chapter we discuss the basics of good monitoring design. ‘Design’ here means the stipulation of where, when and how many observations or sampling units are taken to provide the data from which we will make inferences against some specified objectives. We discuss here the underlying principles that we consider central to good design, and present an ideal case. In the interests of establishing an understanding of why elaborate designs are often presented, we ignore for the moment the ubiquitous compromises that are necessary for logistic, social or economic reasons. We do not focus here on particular variables (chapter 10), what sorts of changes are considered important (chapter 11) or the specifics of natural systems in the interests of presenting the general principles that underlie good monitoring for most variables in almost any system. Nor do we discuss here the analytical tools used to refine or optimize designs or analyse the resultant data (chapters 7–13). This chapter should be read, therefore, as a conceptual overview of the design principles that motivate us and which will be expanded in operational detail throughout later chapters.
We recognize that ‘ideal’ designs will rarely, if ever, be feasible (for a variety of reasons) and discuss in later chapters what compromises are most likely to be precipitated by the characteristics of streams (chapter 8) or because of accidents of history, money etc. In beginning here with an outline of the concepts behind an ‘ideal’ case, we seek to establish the principle that all these inevitable compromises are just that – compromises.