The reader who has made it this far may feel that a lot of mileage has been squeezed from one simple concept, namely the zero-sum game with non-detection probability or time to detection as the payoff to the verified party. Is this model really justified, or do the many ‘optimal’ inspection strategies that have been derived in the preceding chapters have little relevance to real-world conflicts of interest, with all their ambiguity and complication?
Even under the assumption of illegal behavior, the zero-sum game seems questionable. What about false alarms, for instance? Their avoidance is clearly in both players' interest. And what about the option of legal behavior? In practice the inspectee chooses this option in the vast majority of cases. Surely that fact should influence the inspector's actions. Which inspection strategies deter violations? Can deterrence be quantified?
We approached these questions briefly in Chapter 6 in the context of attributes sampling. Here we shall face them head-on. There is a more general way of analyzing verification problems, one which takes into account such broader questions and which also fortunately legitimizes the results we have obtained in the foregoing chapters. It involves a deeper application of the formalism of non-cooperative games and the associated solution concept of equilibrium strategies. It treats the deterrence aspect in a novel way by examining a class of games in which the inspector announces his verification strategy in advance and in a credible way.
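To make the preceding discussion concrete, here is a minimal numerical sketch, not taken from the text, of the kind of zero-sum inspection game referred to above: the inspector's mixed strategy maximizing the guaranteed detection probability is computed by linear programming. The 2x2 payoff matrix of detection probabilities is purely illustrative.

import numpy as np
from scipy.optimize import linprog

# Illustrative detection-probability matrix: rows are the inspector's pure
# strategies (where to concentrate inspections), columns are the inspectee's
# pure strategies (where to violate). Entries are detection probabilities.
A = np.array([[0.8, 0.1],
              [0.2, 0.6]])
m, n = A.shape

# Variables (x_1, ..., x_m, v): x is the inspector's mixed strategy, v the
# guaranteed detection probability. Maximize v, i.e. minimize -v, subject to
# A^T x >= v for every column of A, sum(x) = 1 and x >= 0.
c = np.r_[np.zeros(m), -1.0]
A_ub = np.hstack([-A.T, np.ones((n, 1))])    # -A^T x + v <= 0
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("inspector's mixed strategy:", np.round(res.x[:m], 3))
print("guaranteed detection probability:", round(res.x[m], 3))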
Trust but verify! is the advice that Lenin is supposed to have imparted to his followers (see the motto of Chapter 1); Trust and Verify is the title of the very useful Bulletin issued by the Verification Technology Information Centre to keep us informed on current developments in arms control, disarmament and associated verification requirements; Verify and enjoy doing it well! might well be the subtitle of this book, which is quite remarkable for its engaging style. It covers a considerable amount of material, including quite technical stuff (with theorems proved and calculations worked out in detail), yet its two authors have managed to write it with such lively prose, and to include such a wealth of witty examples, as to make it a most enjoyable read, as even the casual browser will immediately discover.
But this book should appeal not only to the casual browser: it deserves to be read by all those who have an interest in verification — even if they are unable to appreciate all its mathematical niceties. And it will certainly be studied by those who have a professional involvement in this essential aspect of international relations.
The view that came to prevail in the modern era, in contrast to Aristotelian philosophy, namely that an intelligible order can be found in the real world only insofar as qualitative determinations are reduced to quantitative ones, has become of fundamental importance.
— Hermann Weyl
As the eye was made for colors and the ear for sounds, so the human mind was made to understand not just anything, but quantities.
— Johannes Kepler
The original incentive for this book was the interest aroused by an article published by the authors in the Bulletin of the European Safeguards Research and Development Association, entitled Inspection Randomization for Pedestrians. In it we demonstrated, with a tongue-in-cheek example, how a simple game-theoretical treatment could justify, and even quantify, a proposal which had often been made for purely pragmatic reasons. The proposal was to concentrate IAEA inspection resources in the most sensitive areas of the nuclear fuel cycle whilst reducing safeguards effort at power reactors. Our article not only supported this idea, but showed that the concentration would not incur any real loss in detection capability.
That short paper now forms the basis for the introductory example in the present book, a work which might well have been given the title Verification Theory for Pedestrians. Wishing to avoid condescension, and notwithstanding such erudite precedents as H. J. Lipkin's classic Lie Groups for Pedestrians, we chose a slightly more pedestrian title. We have, however, tried to maintain the relaxed and informal style of our original article without, we fervently hope, overdoing it.
— the March Hare, Alice and the Mad Hatter (in order of appearance)
Nuclear material safeguards are applied by the International Atomic Energy Agency (IAEA) world-wide in partial fulfillment of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT); see IAEA (1981a). This is the verification regime for which the most experience has been accumulated and for which the most intensive research and development has been carried out. Correspondingly, there exist many qualitative descriptions and analyses of IAEA safeguards (see for example IAEA (1981a), (1985a), (1985b), (1987)) and it is not our intention to add to them here. We'll restrict ourselves to saying that the fundamental technical basis of the IAEA's control measures is material accountancy (IAEA (1972)). This involves first and foremost the verification of reported data on material movements and inventories, either by means of independent physical measurement during on-site inspection or with the aid of tamper-proof sealing and electronic surveillance equipment, and the closing of material balances.
Like nuclear energy production itself, and notwithstanding revelations in the aftermath of the Gulf War, nuclear safeguards are presently not the issue of central importance that they once were. New verification problems and proposals for their solution now dominate international discussion.
Pollution control is a good subject for a data verification theory, one whose relevance is, sadly, destined to grow. In this chapter we consider the monitoring of compliance with regulatory limits on the emission of pollutants from a point source. Our objective is to introduce the concept of statistical testing and to derive a theory for the optimization of variables sampling procedures.
The term variables, in connection with verification by random sampling, was discussed briefly in the introduction to the second chapter. Here is a regulator's definition (US Gov. (1957)):
… (variables inspection) is inspection wherein a specified quality characteristic on a unit of product is measured on a continuous scale, such as pounds, inches, feet per second, etc., and a measurement is recorded. The unit of product is the entity of product inspected in order to determine its measurable quality characteristic. The quality characteristic for variables inspection is that characteristic of a unit of product that is actually measured, to determine conformity with a given requirement …
The reader might now be forgiven for wishing to skip the present chapter altogether, so we'll return (hastily) to our earlier definition: The variables sampling procedure is one which explicitly takes into account measurement errors. The differences between the inspectee's reported data and the inspector's findings are evaluated quantitatively; see for instance Encycl. (1982). Therefore there exists a chance of mistakenly concluding illegal behavior, that is, false alarms are possible.
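As a minimal illustration of what such a quantitative evaluation can look like, the sketch below sets up a one-sided test on the mean of the operator-inspector differences, assuming normally distributed measurement errors with known standard deviation. The decision threshold is fixed by a chosen false alarm probability, and the detection probability for an assumed falsification follows. All numbers are illustrative, not taken from the text.

import numpy as np
from scipy.stats import norm

# Assumed parameters (illustrative): sigma is the standard deviation of a
# single reported-minus-verified difference, n the sample size, alpha the
# false alarm probability, mu1 the assumed shift per datum under falsification.
sigma, n, alpha, mu1 = 0.5, 20, 0.05, 0.3

# Reject "legal behavior" if the mean difference exceeds this threshold.
threshold = norm.ppf(1 - alpha) * sigma / np.sqrt(n)
# Non-detection probability under the assumed falsification mu1.
beta = norm.cdf(norm.ppf(1 - alpha) - mu1 * np.sqrt(n) / sigma)
print(f"decision threshold: {threshold:.3f}")
print(f"detection probability 1 - beta: {1 - beta:.3f}")

# Applying the test to simulated differences (legal behavior in this run).
rng = np.random.default_rng(0)
differences = rng.normal(loc=0.0, scale=sigma, size=n)
print("alarm raised:", differences.mean() > threshold)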
An inspectorate given the task of controlling activities in a major sector of industry is essentially faced with a large-scale sampling problem. Many locations may offer a potential threat of violation, so a prudent yet effective use of available inspection resources is called for. Above all, priorities must be defined.
An important example arises in the context of recent attempts to eliminate chemical weapons through an international convention. Section 6.1 of this chapter focuses on what is probably the most difficult verification problem associated with the newly negotiated Chemical Weapons Convention (CWC), namely the control of activities not prohibited by the agreement, in particular those which involve the large-scale industrial use of the so-called key precursors. The discussion is used to develop a simple but rather general solution to the resource distribution problem.
In our presentation we will take Thoreau seriously and aggregate all verification details into a priori detection probabilities specific to given inspection locations or possible ‘violation paths’. The term global sampling, which heads this chapter, is intended to characterize the class of models treated. At first, overall detection probability is the single optimization criterion and the option of legal behavior on the part of the inspectee will be ignored. Later, in Section 6.2, we develop and apply a theory of global sampling which additionally takes into account subjective preferences regarding the attractiveness or seriousness of individual violation activities.
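As a hedged sketch of this aggregated view, suppose the detection probability at location i under inspection effort x_i is modelled as 1 - exp(-a_i x_i); the exponential model and the numbers below are illustrative assumptions, not formulas from the text. Maximizing the worst-case detection probability over locations, subject to a fixed total effort, then equalizes the detection probabilities and has a closed form.

import numpy as np

# Hypothetical effectiveness parameters per location (illustrative).
a = np.array([2.0, 1.0, 0.5])
C = 3.0   # total inspection effort available

# Maximizing the minimum detection probability over locations equalizes
# 1 - exp(-a_i * x_i) across i; under this model the optimum is explicit.
x = (C / a) / np.sum(1.0 / a)      # effort allocated to each location
p = 1.0 - np.exp(-a * x)           # resulting detection probabilities (all equal)
print("effort allocation:", np.round(x, 3))
print("guaranteed detection probability:", round(p[0], 3))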
Verification procedures usually involve random sampling. Suppose an inspectee is obliged within the framework of an agreement, law, treaty, etc. to report data on inventories, stocks, emissions or transfers. An inspector may then have the task of verifying the reported data with the help of his own independent observations. The inspector's observations will, in general, consist of some representative random sample of the reported data, his time and resources being limited either physically or under the terms of the verification agreement.
An obvious purpose of the sampling procedure is the detection of illegal behavior of the inspectee with some acceptable probability. Equally important, especially in international affairs (see the first chapter), is the certification of legal behavior.
Two main categories of random sampling are conventionally distinguished: attributes and variables sampling. Variables sampling explicitly takes into account measurement errors. The differences between the inspectee's reported data and the inspector's findings are evaluated quantitatively using statistical tests. Consequently there exists a chance of incorrectly concluding illegal behavior or, put another way, the false alarm probability is finite. Attributes sampling, on the other hand, seeks to detect qualitative differences between reported data and inspector observation. Any differences which exist are supposed to be as manifest as those distinguishing lads from lasses: the false alarm probability is zero.
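For attributes sampling the key quantity is easy to write down: if the inspector draws n of N items at random and r of them have been falsified in a way that is recognized with certainty once inspected, the detection probability is hypergeometric. A minimal sketch follows; the numbers are illustrative, not taken from the text.

from math import comb

def attributes_detection_probability(N: int, n: int, r: int) -> float:
    """Probability of catching at least one of r falsified items when n of N
    items are drawn at random and any falsified item in the sample is
    recognized with certainty (no false alarms)."""
    if r == 0 or n == 0:
        return 0.0
    if n + r > N:
        return 1.0   # the sample cannot avoid every falsified item
    return 1.0 - comb(N - r, n) / comb(N, n)

# Illustrative numbers: 500 sealed containers, 5 falsified, 50 inspected.
print(round(attributes_detection_probability(500, 50, 5), 3))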
The attributes technique will be illustrated in this chapter with examples taken from the verification of conventional arms control and nuclear non-proliferation. Variables sampling is the subject of the next two chapters.
In this chapter we discuss the solution of problems by a number of processors working in concert. In specifying an algorithm for such a setting, we must specify not only the sequence of actions of individual processors, but also the actions they take in response to the actions of other processors. The organization and use of multiple processors has come to be divided into two categories: parallel processing and distributed processing. In the former, a number of processors are coupled together fairly tightly: they are similar processors running at roughly the same speeds and they frequently exchange information with relatively small delays in the propagation of such information. For such a system, we wish to assert that at the end of a certain time period, all the processors will have terminated and will collectively hold the solution to the problem. In distributed processing, on the other hand, less is assumed about the speeds of the processors or the delays in propagating information between them. Thus, the focus is on establishing that algorithms terminate at all, on guaranteeing the correctness of the results, and on counting the number of messages that are sent between processors in solving a problem. We begin by studying a model for parallel computation. We then describe several parallel algorithms in this model: sorting, finding maximal independent sets in graphs, and finding maximum matchings in graphs. We also describe the randomized solution of two problems in distributed computation: the choice coordination problem and the Byzantine agreement problem.
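As one concrete illustration of the parallel setting (the graph and the details of the round structure are illustrative assumptions, not taken from the text), here is a sequential simulation of a round-based randomized computation of a maximal independent set in the spirit of Luby's parallel algorithm.

import random

def randomized_mis(adj):
    """Round-based randomized maximal independent set, simulated sequentially.
    adj: dict mapping each vertex to the set of its neighbours."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    mis = set()
    while adj:
        # Each remaining vertex marks itself with probability 1/(2*degree);
        # isolated vertices always join the independent set.
        marked = {v for v, nbrs in adj.items()
                  if not nbrs or random.random() < 1.0 / (2 * len(nbrs))}
        # Conflict resolution: on a marked edge, keep only the endpoint of
        # higher degree (ties broken by vertex label).
        for v in list(marked):
            for u in adj[v]:
                if u in marked and (len(adj[u]), u) > (len(adj[v]), v):
                    marked.discard(v)
                    break
        mis |= marked
        # Remove the chosen vertices and their neighbours from the graph.
        removed = marked | {u for v in marked for u in adj[v]}
        adj = {v: nbrs - removed for v, nbrs in adj.items() if v not in removed}
    return mis

# Illustrative use on the 5-cycle 0-1-2-3-4-0.
graph = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(randomized_mis(graph))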
The study of random walks on graphs is fascinating in its own right. In addition, it has a number of applications to the design and analysis of randomized algorithms. This chapter will be devoted to studying random walks on graphs, and to some of their algorithmic applications. We start by describing a simple algorithm for the 2-SAT problem, and analyze it by studying the properties of random walks on the line. Following a brief treatment of the basics of Markov chains, we consider random walks on undirected graphs. It is shown that there is a strong connection between random walks and the theory of electric networks. Random walks are then applied to the problem of determining the connectivity of graphs. Next, we turn to the study of random walks on expander graphs. We define a class of expanders and use algebraic graph theory to characterize their properties. Finally, we illustrate the special properties of random walks on expanders via an application to probability amplification.
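The 2-SAT algorithm mentioned above can be sketched in a few lines: start from an arbitrary assignment and, while some clause is unsatisfied, flip a uniformly random one of the two variables of some unsatisfied clause. The random-walk analysis shows that, on a satisfiable instance with n variables, a run of 2n^2 flips succeeds with probability at least 1/2. The clause encoding below is an assumption made for this sketch.

import random

def random_walk_2sat(clauses, n_vars, seed=None):
    """Randomized 2-SAT. Clauses are pairs of literals; literal +i / -i means
    variable i is true / false (variables numbered 1..n_vars). Returns a
    satisfying assignment or None if none is found within 2*n_vars**2 flips."""
    rng = random.Random(seed)
    assignment = {i: rng.choice([True, False]) for i in range(1, n_vars + 1)}

    def satisfied(lit):
        return assignment[abs(lit)] == (lit > 0)

    for _ in range(2 * n_vars * n_vars):
        unsatisfied = [c for c in clauses if not (satisfied(c[0]) or satisfied(c[1]))]
        if not unsatisfied:
            return assignment
        lit = rng.choice(rng.choice(unsatisfied))        # random literal of a random bad clause
        assignment[abs(lit)] = not assignment[abs(lit)]  # flip that variable
    return None

# Illustrative instance: (x1 or x2) and (not x1 or x3) and (not x2 or not x3).
print(random_walk_2sat([(1, 2), (-1, 3), (-2, -3)], n_vars=3, seed=1))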
Let G = (V,E) be a connected, undirected graph with n vertices and m edges. For a vertex v ∈ V, Γ(v) denotes the set of neighbors of v in G. A random walk on G is the following process, which occurs in a sequence of discrete steps: starting at a vertex v0, we proceed at the first step to a randomly chosen neighbor of v0. This may be thought of as choosing a random edge incident on v0 and walking along it to a vertex v1 ∈ Γ(v0).
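A minimal simulation of this definition, with an illustrative adjacency-list graph:

import random

def random_walk(adj, v0, steps, seed=None):
    """Simulate a random walk on an undirected graph given as an adjacency
    list (dict: vertex -> list of neighbours), starting at v0."""
    rng = random.Random(seed)
    path = [v0]
    for _ in range(steps):
        path.append(rng.choice(adj[path[-1]]))   # move to a uniformly random neighbour
    return path

# Illustrative graph: a 4-cycle with one chord.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(random_walk(adj, v0=0, steps=10, seed=42))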
All the algorithms we have studied so far receive their entire inputs at one time. We turn our attention to online algorithms, which receive and process the input in partial amounts. In a typical setting, an online algorithm receives a sequence of requests for service. It must service each request before it receives the next one. In servicing each request, the algorithm has a choice of several alternatives, each with an associated cost. The alternative chosen at a step may influence the costs of alternatives on future requests. Examples of such situations arise in data-structuring, resource-allocation in operating systems, finance, and distributed computing.
In an online setting, it is often meaningless to have an absolute performance measure for an algorithm. This is because in most such settings, any algorithm for processing requests can be forced to incur an unbounded cost by appropriately choosing the input sequence (we study examples of this below); thus, it becomes difficult, if not impossible, to perform a comparison of competing strategies. Consequently, we compare the total cost of the online algorithm on a sequence of requests, to the total cost of an offline algorithm that services the same sequence of requests. We refer to such an analysis of an online algorithm as a competitive analysis; we will make these notions formal presently.
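As a worked illustration of a competitive ratio, consider the ski-rental problem, a standard textbook example that is not drawn from this excerpt: renting costs 1 per day, buying costs B, and the number of skiing days is unknown in advance. The break-even rule of renting until the purchase price would be reached, then buying, never pays more than (2B-1)/B times the offline optimum.

def offline_cost(days, buy_price):
    # With hindsight: rent every day or buy on day one, whichever is cheaper.
    return min(days, buy_price)

def online_cost(days, buy_price):
    # Break-even rule: rent until one more day of rent would bring total
    # spending to the purchase price, then buy.
    if days < buy_price:
        return days                        # rented every day, never bought
    return (buy_price - 1) + buy_price     # rented buy_price - 1 days, then bought

B = 10
worst = max(online_cost(d, B) / offline_cost(d, B) for d in range(1, 100))
print(f"worst-case cost ratio over these sequences: {worst:.2f}")   # (2B-1)/B < 2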