TU economies with private production are shown to have a value, as defined in Mertens (1988), without any differentiability or interiority or other restriction. An explicit formula is given, describing the value as a barycenter of the core, for a probability distribution depending only on the set of net trades at equilibrium.
Introduction
We prove existence and exhibit the formula for the value of transferable utility markets, getting rid of differentiability assumptions and allowing for private production (as a first step toward removing the Aumann–Perles assumptions and the monotonicity assumptions). Under differentiability assumptions, the treatment by Aumann and Shapley yielded equivalence with the core. In the nondifferentiable case the more powerful value constructed in Mertens (1988) is required. In particular, whereas the differentiable case uses the symmetry axiom only in a “first-order” sense—comparable to the strong law of large numbers—in the nondifferentiable case it is used in its full force, in a “second-order” sense—comparable to the central limit theorem. But contrary to the case of the central limit theorem, no normal distribution appears here. Indeed, as shown in Hart (1980), formulas involving normal distributions would satisfy only a restricted symmetry property (and are not characterized by it), so no value would be obtained.
In their book Values of Non-Atomic Games, Aumann and Shapley [1] define the value for spaces of nonatomic games as a map from the space of games into bounded finitely additive games that satisfies a list of plausible axioms: linearity, symmetry, positivity, and efficiency. One of the themes of the theory of values is to demonstrate that on given spaces of games this list of plausible axioms determines the value uniquely. One of the spaces of games that have been extensively studied is pNA, which is the closure of the linear space generated by the polynomials of nonatomic measures. Theorem B of [1] asserts that a unique value φ exists on pNA and that ‖φ‖ = 1. This chapter introduces a canonical way to approximate games in pNA by games in pNA that are “identified” with finite games. These are the multilinear nonatomic games—that is, games v of the form v = F ∘ (μ1, μ2, …, μn), where F is a multilinear function and μ1, μ2, …, μn are mutually singular nonatomic measures.
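To fix ideas, the simplest multilinear game already exhibits the identification with a finite game; the following computation (a standard application of the diagonal formula, not taken from the chapter itself) shows it for two mutually singular nonatomic probability measures:

```latex
% Simplest multilinear game: F(x, y) = xy, with \mu_1, \mu_2 mutually singular.
v(S) = \mu_1(S)\,\mu_2(S), \qquad
(\varphi v)(S) = \int_0^1 \bigl[\, t\,\mu_1(S) + t\,\mu_2(S) \,\bigr]\, dt
               = \tfrac{1}{2}\,\mu_1(S) + \tfrac{1}{2}\,\mu_2(S).
```

This matches the Shapley value (1/2, 1/2) of the two-player finite game u identified with v, namely u({1}) = u({2}) = 0 and u({1, 2}) = 1.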
The approximation theorem yields short proofs to classic results, such as the uniqueness of the Aumann–Shapley value on pNA and the existence of the asymptotic value on pNA (see [1, Theorem F]), as well as short proofs for some newer results such as the uniqueness of the μ value on pNA(μ) (see [4]).
The most frequently discussed method of revising a subjective probability distribution P to obtain a new distribution P*, based on the occurrence of an event E, is Bayes's rule: P*(A) = P(AE)/P(E). Richard Jeffrey (1965, 1968) has argued persuasively that Bayes's rule is not the only reasonable way to update: use of Bayes's rule presupposes that both P(E) and P(AE) have been previously quantified. In many instances this will clearly not be the case (for example, the event E may not have been anticipated), and it is of interest to consider how one might proceed.
Example. Suppose we are thinking about three trials of a new surgical procedure. Under the usual circumstances a probability assignment is made on the eight possible outcomes Ω = {000, 001, 010, 011, 100, 101, 110, 111}, where 1 denotes a successful outcome and 0 an unsuccessful one. Suppose a colleague informs us that another hospital had performed this type of operation 100 times, with 80 successful outcomes. This is clearly relevant information and we obviously want to revise our opinion. The information cannot be put in terms of the occurrence of an event in the original eight-point space Ω, and Bayes's rule is not directly available. Among many possible approaches, four methods of incorporating the information will be discussed: (1) complete reassessment; (2) retrospective conditioning; (3) exchangeability; (4) Jeffrey's Rule.
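Jeffrey's Rule, method (4), can be sketched concretely. The sketch below assumes a uniform prior on Ω (three independent trials, success probability 0.5) and assumes the colleague's report moves our probability that the first trial succeeds from 0.5 to 0.8; both numbers are illustrative choices, not the chapter's.

```python
from itertools import product

# Assumed prior: three independent trials, each succeeding with prob 0.5,
# so each of the eight outcomes in Omega gets mass 1/8.
omega = [''.join(bits) for bits in product('01', repeat=3)]
prior = {w: 0.125 for w in omega}

# Partition Omega by the first trial's outcome: E = {first trial succeeds}.
E = [w for w in omega if w[0] == '1']
Ec = [w for w in omega if w[0] == '0']

def jeffrey_update(p, partition, new_masses):
    """Jeffrey's rule: P*(A) = sum_i P(A | B_i) * P*(B_i) over a partition."""
    post = {}
    for block, mass in zip(partition, new_masses):
        total = sum(p[w] for w in block)          # prior mass of the block
        for w in block:
            post[w] = mass * p[w] / total          # rescale within the block
    return post

# Revised judgment (assumed): P*(E) = 0.8 in light of the 80/100 record.
post = jeffrey_update(prior, [E, Ec], [0.8, 0.2])
print(round(sum(post[w] for w in E), 3))   # 0.8: the new mass of E
```

Note that, unlike Bayes's rule, no event in Ω is observed to occur; only the probabilities over the partition {E, Eᶜ} are revised, and conditional probabilities within each block are preserved.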
The weighing of evidence may be viewed as a mental experiment in which the human mind is used to assess probability much as a pan balance is used to measure weight. As in the measurement of physical quantities, the design of the experiment affects the quality of the result.
Often one design for a mental experiment is superior to another because the questions it asks can be answered with greater confidence and precision. Suppose we want to estimate, on the basis of evidence readily at hand, the number of eggs produced daily in the US. One design might ask us to guess the number of chickens in the US and the average number of eggs laid by each chicken each day. Another design might ask us to guess the number of people in the US, the average number of eggs eaten by each person, and some inflation factor to cover waste and export. For most of us, the second design is manifestly superior, for we can make a reasonable effort to answer the questions it asks.
As this example illustrates, the confidence and precision with which we can answer a question posed in a mental experiment depends on how our knowledge is organized and stored, first in our mind and secondarily in other sources of information available to us.
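The two designs can be written out as back-of-the-envelope arithmetic. Every number below is an illustrative guess of the kind the mental experiment asks for, not data:

```python
# Design 1: count chickens, then multiply by daily lay rate.
chickens = 300e6            # guessed number of laying hens in the US
eggs_per_chicken = 0.8      # guessed eggs laid per hen per day
design1 = chickens * eggs_per_chicken          # roughly 2.4e8 eggs/day

# Design 2: count people, multiply by consumption, inflate for waste/export.
people = 250e6              # guessed US population
eggs_per_person = 0.6       # guessed eggs eaten per person per day
inflation = 1.3             # guessed factor covering waste and export
design2 = people * eggs_per_person * inflation  # roughly 1.95e8 eggs/day

print(design1, design2)
```

The point is not the answers but the questions: most of us can guess populations and eating habits with some confidence, while the hen count and lay rate are guesses about quantities we have never observed.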
The theory of values of nonatomic games as developed by Aumann and Shapley was first applied by Billera, Heath, and Raanan (1978) to set equitable telephone billing rates that share the cost of service among users. Billera and Heath (1982) and Mirman and Tauman (1982a) “translated” the axiomatic approach of Aumann and Shapley from values of nonatom games to a price mechanism on the class of differentiable cost functions and hence provided a normative justification, using economic terms only, for the application of the theory of nonatomic games to cost allocation problems. New developments in the theory of games inspired parallel developments to cost allocation applications. For instance, the theory of semi-values by Dubey, Neyman, and Weber (1981) inspired the work of Samet and Tauman (1982), which characterized the class of all “semi-price” mechanisms (i.e., price mechanisms that do not necessarily satisfy the break-even requirement) and led to an axiomatic characterization of the marginal cost prices. The theory of Dubey and Neyman (1984) of nonatomic economies inspired the work by Mirman and Neyman (1983) in which they characterized the marginal cost prices on the class of cost functions that arise from long-run production technologies. Young's (1984) characterization of the Shapley value by the monotonicity axiom inspired his characterization (Young 1985a) of the Aumann—Shapley price mechanism on the class of differentiable cost functions.
The analysis of medical practice as a decision-making process underscores the proposition that the choice of a therapy should reflect not only the knowledge and experience of the physician but also the values and the attitudes of the patient (McNeil, Weichselbaum, and Pauker, 1981). But if patients are to play an active role in medical decision making – beyond passive informed consent – we must find methods for presenting patients with the relevant data and devise procedures for eliciting their preferences among the available treatments. However, the elicitation of preferences, for both patients and physicians, presents a more serious problem than one might expect. Recent studies of judgment and choice have demonstrated that intuitive evaluations of probabilistic data are prone to widespread biases (Kahneman, Slovic, and Tversky, 1982), and that the preference between options is readily influenced by the formulation of the problem (Tversky and Kahneman, 1986).
In a public health problem concerning the response to an epidemic, for example, people prefer a risk-averse strategy when the outcomes are framed in terms of the number of lives saved and a risk-seeking strategy when the same outcomes are framed in terms of the number of lives lost. The tendency to make risk-averse choices in the domain of gains and risk-seeking choices in the domain of losses is a pervasive phenomenon that is attributable to an S-shaped value (or utility) function, with an inflection at one's reference point (Kahneman and Tversky, 1979, 1984).
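An S-shaped value function makes the reversal mechanical. The sketch below uses the functional form and parameter values popularized in later prospect-theory work (α = β = 0.88, λ = 2.25); they are illustrative assumptions, not estimates from this chapter, as are the epidemic numbers:

```python
# S-shaped value function: concave for gains, convex (and steeper) for losses,
# with the inflection at the reference point x = 0.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Gain frame: save 200 lives for sure, vs. a 1/3 chance of saving all 600.
sure_gain = value(200)
gamble_gain = (1 / 3) * value(600)

# Loss frame of the same outcomes: 400 die for sure, vs. a 2/3 chance 600 die.
sure_loss = value(-400)
gamble_loss = (2 / 3) * value(-600)

print(sure_gain > gamble_gain)   # True: risk averse in the domain of gains
print(sure_loss < gamble_loss)   # True: risk seeking in the domain of losses
```

Concavity over gains makes the sure saving look better than its gamble; convexity over losses makes the sure loss look worse than its gamble, even though the two frames describe identical prospects.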
Normative decision theory is the study of guidelines for right action. It involves the formulation and defense of principles of comparative evaluation and choice among competing alternatives, proposed as rules that individuals or societies ought to – or perhaps would want to – follow. It deals also with the implications of these principles both on an abstract level and in reference to particular types of decision situations. The general subject is vast since it covers numerous ethical and normative social theories developed during the past few millennia.
The aim of the present chapter is exceedingly narrow in view of the larger perspective of the subject. It is to discuss a comparatively recent episode in the history of normative decision theory that has been heavily influenced by eighteenth-century Enlightenment thought and the subsequent ascendancy of rationalism and scientific method in the analysis of human behavior. The principals in this episode are, with few exceptions, twentieth-century mathematicians, economists, and statisticians. The exceptions include Daniel Bernoulli (1738), who proposed a theory to explain why choices of prudent individuals among risky monetary options often violate the principle of expected profit maximization, and the Rev. Thomas Bayes (1763), who helped to pioneer the notion of probability as a theory of rational degrees of belief.
The theory I wish to describe is most succinctly known as expected utility theory. This is actually a family of related theories that divide into two subfamilies differentiated by the phrases (Luce and Raiffa, 1957) “decision making under risk” and “decision making under uncertainty.”
We study multiperson games in characteristic function form with transferable utility. The problem is to solve such a game (i.e., to associate to it payoffs to all the players).
Three main solution concepts are as follows. The first was introduced by von Neumann and Morgenstern: A “stable set” of a given game is a set of payoff vectors; such a set, if it exists, need not be unique. Next came the “core,” due to Shapley and Gillies, which is a unique set of payoff vectors. Finally, the Shapley “value” consists of just one payoff vector. There is thus an apparent historical trend from “complexity” to “simplicity” in the structure of the solution.
We propose now an even simpler construction: Associate to each game just one number! How would the payoffs to all players then be determined? By using the “marginal contribution” principle, an approach with a long tradition (especially in economics). Thus, we assign to each player his or her marginal contribution according to the numbers defined earlier. The surprising fact is that only one requirement, that the resulting payoff vector be “efficient” (i.e., that the payoffs add up to the worth of the grand coalition), determines this procedure uniquely.
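The marginal-contribution principle behind the Shapley value can be sketched on a toy game. The three-player characteristic function below is an invented illustration; the computation (averaging each player's marginal contribution over all orderings) is the standard one:

```python
from itertools import permutations

# Toy 3-player TU game in characteristic-function form (illustrative worths).
v = {frozenset(): 0,
     frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 90, frozenset({1, 3}): 80, frozenset({2, 3}): 70,
     frozenset({1, 2, 3}): 120}

def shapley(players, v):
    """Average each player's marginal contribution over all player orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p on joining the coalition so far.
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in phi}

phi = shapley([1, 2, 3], v)
print(phi)   # {1: 45.0, 2: 40.0, 3: 35.0}
```

Efficiency holds by construction: along any ordering the marginal contributions telescope to v of the grand coalition, so the averaged payoffs sum to 120.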
A key tool in the modern analysis of policy is benefit–cost analysis. Though its origin goes back to the remarkably prescient paper of Dupuit (1844), its theoretical development came much later, after the “marginal revolution” of the 1870s, and its practical application really dates only from the period after 1950. The underlying theory is that of the notion of economic surplus, to which, after Dupuit, such major figures as Marshall, Pareto, Hotelling, Allais, and Debreu have contributed: for a remarkable synthesis, see Allais (1981).
Without going into technical details, the essential steps in the actual calculation of a surplus depend on using choices made in one context to infer choices that might be made in different contexts. If we find how much individuals are willing to pay to reduce time spent in going to work by one method, e.g., buying automobiles or moving closer to work, we infer that another method of achieving the same saving of time, e.g., mass transit or wider roads, will be worth the same amount. Frequently, indeed, we extrapolate, or interpolate; if it can be shown that the average individual will pay $1,000 a year more in rent to reduce his or her transit time by 30 minutes, we infer that a reduction of 15 minutes is worth $500. This is all very much according to Dupuit's reasoning; he would value an aqueduct by the amount that individuals would be willing to pay for the water to be transported in it (and vice versa, if the opposite inference is useful).
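The interpolation in the rent example rests on an assumed linearity of willingness to pay in time saved; the text itself leans on this assumption, and the sketch below simply makes it explicit (the function name is ours):

```python
# Dupuit-style inference: assume willingness to pay is linear in minutes of
# commute time saved, calibrated from one observed trade-off.
def wtp_for_time_saved(minutes, premium=1000, baseline_minutes=30):
    """Scale the observed $1,000/year premium for 30 minutes proportionally."""
    return minutes * premium / baseline_minutes

print(wtp_for_time_saved(30))  # 1000.0: the observed rent premium
print(wtp_for_time_saved(15))  # 500.0: the interpolated value of 15 minutes
```

The same calibration extrapolates as well as interpolates, which is exactly where the method is most fragile: nothing in the observed choice guarantees that value is linear outside the observed range.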
More than three decades have passed since 1954, when L. J. Savage published The Foundations of Statistics. The controversy raised by this book and Savage's subsequent writings is now part of the past. Many statisticians now use Savage's idea of personal probability in their practical and theoretical work, and most of the others have made their peace with the idea in one way or another. Thus the time may be ripe for a reexamination of Savage's argument for subjective expected utility.
Savage's argument begins with a set of postulates for preferences among acts. Savage believed that a rational person's preferences should satisfy these postulates, and he showed that these postulates imply that the preferences agree with a ranking by subjective expected utility. He concluded that it is normative to make choices that maximize subjective expected utility. To do otherwise is to violate a canon of rationality.
In the 1950s and 1960s, Savage's understanding of subjective expected utility played an important role in freeing subjective probability judgment from the strictures of an exaggerated frequentist philosophy of probability. Today, however, it no longer plays this progressive role. The need for subjective judgment is now widely understood. Increasingly, the idea that subjective expected utility is uniquely normative plays only a regressive role; it obstructs the development and understanding of alternative tools for subjective judgment of probability and value.
We examine the connection, and the distinction, between decreasing marginal value (whatever that may mean) and risk aversion (Pratt, 1964). When a decision maker (DM henceforth) declares indifference between $1,500 for certain and a 50–50 lottery with payoffs $0 and $5,000, the DM may have two concerns: (1) a feeling that going from $0 to $2,500 is “worth far more” than going from $2,500 to $5,000, and (2) being “nervous” about the uncertainty in the gamble. We will call the first concern “strength of preference” and the second concern “intrinsic risk aversion.” How much of the $1,000 difference between the arithmetical average of the gamble payoffs and the certainty equivalent is due to each of these concerns? Apart from a natural curiosity about such things we have other motivations:
(a) Many utility-assessment procedures currently rely heavily on answers to questions about gambles (e.g., Keeney and Raiffa, 1976); but decision makers are often uncomfortable with making choices among risky alternatives. Can alternative procedures that do not rely heavily on gambling questions be used justifiably?
(b) Many value-assessment procedures rely exclusively on strength-of-preference protocols (e.g., by comparing increments of gain or loss) and never confront subjects with risky choices. But some of these studies are then used to guide risky-choice options. […]
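The proposed split of the $1,000 premium can be sketched numerically. The riskless value function v(x) = x^0.8 below is an illustrative assumption standing in for an elicited strength-of-preference function, not anything measured in the chapter:

```python
# Decompose the risk premium on the 50-50 lottery over $0 and $5,000,
# given a stated certainty equivalent of $1,500.
def v(x):
    return x ** 0.8          # assumed strength-of-preference (value) function

def v_inv(y):
    return y ** (1 / 0.8)    # inverse of v on the nonnegative reals

ev = 0.5 * 0 + 0.5 * 5000              # arithmetical average: $2,500
midvalue = 0.5 * v(0) + 0.5 * v(5000)  # average of the payoffs' values
riskless_equiv = v_inv(midvalue)       # amount matching that average value
ce = 1500                              # DM's stated certainty equivalent

strength_part = ev - riskless_equiv    # due to decreasing marginal value
intrinsic_part = riskless_equiv - ce   # residual "nervousness" about risk
print(round(strength_part, 2), round(intrinsic_part, 2))
```

Under this assumed v, the curvature of value alone pulls the equivalent down from $2,500 to about $2,100, and the remaining gap down to the stated $1,500 is attributed to intrinsic risk aversion; the two parts sum to the full $1,000 premium by construction.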
The chapter on framing by Tversky and Kahneman (1982) demonstrates that normatively inconsequential changes in the formulation of choice problems significantly affect preferences. These effects are noteworthy because they are sizable (sometimes complete reversals of preference), because they violate important tenets of rationality, and because they influence not only behavior but how the consequences of behavior are experienced. These perturbations are traced (in prospect theory; see Kahneman and Tversky, 1979) to the interaction between the manner in which acts, contingencies, and outcomes are framed in decision problems and general propensities for treating values and uncertainty in nonlinear ways.
The present chapter begins by providing additional demonstrations of framing effects. Next, it extends the concept of framing to effects induced by changes of response mode, and it illustrates effects due to the interaction between response mode and information-processing considerations. Two specific response modes are studied in detail: judgments of single objects and choices among two or more options. Judgments are prone to influence by anchoring-and-adjustment processes, which ease the strain of integrating diverse items of information. Choices are prone to context effects that develop as a result of justification processes, through which the deliberations preceding choice are woven into a rationalization of the chosen action. As we shall see, these processes often cause judgments and choices to be inconsistent with one another.