We study multiperson games in characteristic function form with transferable utility. The problem is to solve such a game (i.e., to associate with it payoffs for all the players).
Three main solution concepts are as follows. The first was introduced by von Neumann and Morgenstern: A “stable set” of a given game is a set of payoff vectors; such a set, if it exists, need not be unique. Next came the “core,” due to Shapley and Gillies, which is a unique set of payoff vectors. Finally, the Shapley “value” consists of just one payoff vector. There is thus an apparent historical trend from “complexity” to “simplicity” in the structure of the solution.
We propose now an even simpler construction: Associate to each game just one number! How would the payoffs to all players then be determined? By using the “marginal contribution” principle, an approach with a long tradition (especially in economics). Thus, we assign to each player his or her marginal contribution computed from these numbers: the number assigned to the game minus the number assigned to the game without that player. The surprising fact is that only one requirement, that the resulting payoff vector be “efficient” (i.e., that the payoffs add up to the worth of the grand coalition), determines this procedure uniquely.
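Concretely, the construction can be sketched as follows (the notation $P$ for the number attached to a game, and $\varphi_i$ for player $i$'s payoff, is ours, not the abstract's): each player receives
\[
\varphi_i(N,v) \;=\; P(N,v) \;-\; P\bigl(N\setminus\{i\},\,v\bigr),
\]
and the single requirement
\[
\sum_{i\in N}\bigl[P(N,v)-P\bigl(N\setminus\{i\},\,v\bigr)\bigr] \;=\; v(N),
\]
applied to the game and to all of its subgames (with the normalization $P(\emptyset,v)=0$), pins down $P$, and hence the payoffs, uniquely.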
1. Interpersonal comparability of utility is generally regarded as an unsound basis on which to erect theories of multipersonal behavior. Nevertheless, it enters naturally—and, I believe, properly—as a nonbasic, derivative concept playing an important if sometimes hidden role in the theories of bargaining, group decisionmaking, and social welfare. The formal and conceptual framework of game theory is well adapted for a broad and unified approach to this group of theories, though it tends to slight the psychological aspects of group interaction in favor of the structural aspects—e.g., complementary physical resources, the channels of information and control, the threats and other strategic options open to the participants, etc. In this note I shall discuss two related topics in which game theory becomes creatively involved with questions of interpersonal utility comparison.
The first topic concerns the nature of the utility functions that are admissible in a bargaining theory that satisfies certain minimal requirements. I shall show, by a simple argument, that while cardinal utilities are admissible, purely ordinal utilities are not. Some intriguing intermediate systems are not excluded. The argument does not depend on the injection of probabilities or uncertainty into the theory.
The second topic concerns a method of solving general n-person games by making use of the interpersonal comparisons of utility that are implicit in the solution.
The competitive equilibrium, core, and value are solution notions widely used in economics and are based on disparate ideas. The competitive equilibrium is a notion of noncooperative equilibrium based on individual optimization. The core is a notion of cooperative equilibrium based on what groups of individuals can extract from society. The value can be interpreted as a notion of fair division based on what individuals contribute to society. It is a remarkable fact that, under appropriate assumptions, these solution notions (nearly) coincide in large economies. The (near) coincidence of the competitive equilibrium and the core for large exchange economies was first suggested by Edgeworth (1881) and rigorously established by Debreu and Scarf (1963) in the context of replica economies and by Aumann (1964) in the context of continuum economies. This pioneering work has since been extended to much wider contexts; see Hildenbrand (1974) and Anderson (1986) for surveys. The (near) coincidence of the value and the competitive equilibrium (and hence the core) for large exchange economies was first suggested by Shubik and rigorously established by Shapley (1964) in the context of replica economies with money. This pioneering work, too, has since been extended to much wider contexts; see, for example, Shapley and Shubik (1969), Aumann and Shapley (1974), Aumann (1975), Champsaur (1975), Hart (1977), Mas-Colell (1977), and Cheng (1981).
Shapley's combinatorial representation of the Shapley value is embodied in a formula that gives each player his expected marginal contribution to the set of players that precede him, where the expectation is taken with respect to the uniform distribution over the set of all orders of the players. We obtain alternative combinatorial representations that are based on allocating to each player the average relative payoff of coalitions that contain him, where one averages first over the sets of fixed cardinality that contain the player and then averages over the different cardinalities. Different base levels in comparison to which relative payoffs are evaluated yield different combinatorial formulas.
Introduction
The familiar representation of the Shapley value gives each player his “average marginal contribution to the players that precede him,” where averages are taken with respect to all potential orders of the players; see Shapley (1953). This chapter looks at three alternative representations of the Shapley value, each expressing the idea that a player gets the “average relative payoff to coalitions that contain him.” The common feature of the three representations we obtain is the way averages are taken, whereas the distinctive feature is the base level in comparison to which relative payoffs are evaluated.
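In standard notation (ours, not the chapter's), the familiar representation reads
\[
\varphi_i(v)\;=\;\frac{1}{n!}\sum_{\sigma}\Bigl[v\bigl(P_i^{\sigma}\cup\{i\}\bigr)-v\bigl(P_i^{\sigma}\bigr)\Bigr]
\;=\;\sum_{S\subseteq N\setminus\{i\}}\frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl[v(S\cup\{i\})-v(S)\bigr],
\]
where the first sum ranges over all $n!$ orders $\sigma$ of the $n$ players and $P_i^{\sigma}$ denotes the set of players preceding $i$ in the order $\sigma$.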
Among the obligations facing a community of scholars is to make accessible to a wider community the ideas it finds useful and important. A related obligation is to recognize lasting contributions to ideas and to honor their progenitors. In this volume we undertake to fulfill, at least in part, both obligations.
The papers in this volume review and continue research that has grown out of a remarkable 1953 paper by Lloyd Shapley. There he proposed that it might be possible to evaluate, in a numerical way, the “value” of playing a game. The particular function he derived for this purpose, which has come to be called the Shapley value, has been the focus of sustained interest among students of cooperative game theory ever since. In the intervening years, the Shapley value has been interpreted and reinterpreted. Its domain has been extended and made more specialized. The same value function has been (re)derived from apparently quite different assumptions. And whole families of related value functions have been found to arise from relaxing various of the assumptions.
The reason the Shapley value has been the focus of so much interest is that it represents a distinct approach to the problems of complex strategic interaction that game theory seeks to illuminate.
At the foundation of the theory of games is the assumption that the players of a game can evaluate, in their utility scales, every “prospect” that might arise as a result of a play. In attempting to apply the theory to any field, one would normally expect to be permitted to include, in the class of “prospects,” the prospect of having to play a game. The possibility of evaluating games is therefore of critical importance. So long as the theory is unable to assign values to the games typically found in application, only relatively simple situations—where games do not depend on other games—will be susceptible to analysis and solution.
In the finite theory of von Neumann and Morgenstern difficulty in evaluation persists for the “essential” games, and for only those. In this note we deduce a value for the “essential” case and examine a number of its elementary properties. We proceed from a set of three axioms, having simple intuitive interpretations, which suffice to determine the value uniquely.
Our present work, though mathematically self-contained, is founded conceptually on the von Neumann–Morgenstern theory up to their introduction of characteristic functions. We thereby inherit certain important underlying assumptions: (a) that utility is objective and transferable; (b) that games are cooperative affairs; (c) that games, granting (a) and (b), are adequately represented by their characteristic functions.
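For reference, the three axioms are usually stated along the following lines (paraphrased here in standard notation; the chapter's own formulation may differ in detail):
\[
\begin{aligned}
&\text{Symmetry:} && \varphi_{\pi i}(\pi v)=\varphi_i(v)\ \text{for every permutation }\pi\text{ of the players};\\
&\text{Carrier (efficiency):} && \textstyle\sum_{i\in S}\varphi_i(v)=v(S)\ \text{for every carrier }S\text{ of }v;\\
&\text{Additivity:} && \varphi_i(v+w)=\varphi_i(v)+\varphi_i(w)\ \text{for all games }v,w.
\end{aligned}
\]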
The study of methods for measuring the “value” of playing a particular role in an n-person game is motivated by several considerations. One is to determine an equitable distribution of the wealth available to the players through their participation in the game. Another is to help an individual assess his prospects from participation in the game.
When a method of valuation is used to determine equitable distributions, a natural defining property is “efficiency”: The sum of the individual values should equal the total payoff achieved through the cooperation of all the players. However, when the players of a game individually assess their positions in the game, there is no reason to suppose that these assessments (which may depend on subjective or private information) will be jointly efficient.
This chapter presents an axiomatic development of values for games involving a fixed finite set of players. We primarily seek methods for evaluating the prospects of individual players, and our results center around the class of “probabilistic” values (defined in the next section). In the process of obtaining our results, we examine the role played by each of the Shapley axioms in restricting the set of value functions under consideration, and we trace in detail (with occasional excursions) the logical path leading to the Shapley value.
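In standard notation (ours, not the chapter's), a probabilistic value has the general form
\[
\varphi_i(v)\;=\;\sum_{S\subseteq N\setminus\{i\}} p^{\,i}_S\,\bigl[v(S\cup\{i\})-v(S)\bigr],
\]
where, for each player $i$, $\{p^{\,i}_S\}$ is a probability distribution over the coalitions not containing $i$; the efficiency property discussed above then amounts to requiring $\sum_{i\in N}\varphi_i(v)=v(N)$ for every game $v$.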
One of the main axioms that characterizes the Shapley value is the axiom of symmetry. However, in many applications the assumption that the players are completely symmetric, except for the parameters of the game, seems unrealistic. Thus, the use of nonsymmetric generalizations of the Shapley value has been proposed for such cases.
Weighted Shapley values were discussed in the original Shapley (1953a) Ph.D. dissertation. Owen (1968, 1972) studied weighted Shapley values through probabilistic approaches. Axiomatizations of nonsymmetric values were given by Weber (Chapter 7, this volume), Shapley (1981), Kalai and Samet (1987), and Hart and Mas-Colell (1987).
Consider, for example, a situation involving two players. If the two players cooperate in a joint project, they can generate a unit profit that is to be divided between them. On their own they can generate no profit. The Shapley value views this situation as being symmetric and would allocate the profit from cooperation equally between the two players. However, in some applications lack of symmetry may be present. It may be, for example, that for the project to succeed, a greater effort is needed on the part of player 1 than on the part of player 2. Another example arises in situations where player 1 represents a large constituency with many individuals and player 2's constituency is small (see, for example, Kalai 1977 and Thomson 1986).
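For illustration (the weights below are ours, not the chapter's), attach positive weights $w_1$ and $w_2$ to the two players. The weighted Shapley value then divides the unit profit in proportion to the weights,
\[
\varphi_1=\frac{w_1}{w_1+w_2},\qquad \varphi_2=\frac{w_2}{w_1+w_2},
\]
which reduces to the symmetric split $(\tfrac12,\tfrac12)$ of the Shapley value when $w_1=w_2$.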
In 1954 Lloyd Shapley and Martin Shubik published a short paper [12] in the American Political Science Review, proposing that the specialization of the Shapley value to simple games could serve as an index of voting power. That paper has been one of the most frequently cited articles in the social science literature of the past thirty years, and its “Shapley–Shubik power index” has become widely known. Shapley and Shubik explained the index as follows:
There is a group of individuals all willing to vote for some bill. They vote in order. As soon as a majority has voted for it, it is declared passed, and the member who voted last is given credit for having passed it. Let us choose the voting order of the members randomly. Then we may compute the frequency with which an individual … is pivotal. This latter number serves to give us our index. It measures the number of times that the action of the individual actually changes the state of affairs. …
Of course, the actual balloting procedure used will in all probability be quite different from the above. The “voting” of the formal scheme might better be thought of as declarations of support for the bill and the randomly chosen order of voting as an indication of the relative degrees of support by the different members, with the most enthusiastic members “voting” first, etc. […]
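The index described in the quotation can be computed directly by enumerating voting orders. Below is a minimal sketch in Python; the function name and the weighted voting game used (a quota of 3 with voter weights 2, 1, 1, 1) are hypothetical choices of ours for illustration, not taken from the Shapley–Shubik paper.

```python
from itertools import permutations
from math import factorial

def shapley_shubik_index(weights, quota):
    """Fraction of voting orders in which each voter is pivotal,
    i.e., the voter whose vote first brings the running total to the quota."""
    n = len(weights)
    pivot_counts = [0] * n
    for order in permutations(range(n)):
        running = 0
        for voter in order:
            running += weights[voter]
            if running >= quota:          # this voter "passes the bill"
                pivot_counts[voter] += 1
                break
    return [count / factorial(n) for count in pivot_counts]

# Hypothetical four-voter example: one voter with weight 2, three with weight 1,
# and a simple-majority quota of 3 (out of 5 total votes).
print(shapley_shubik_index([2, 1, 1, 1], quota=3))
# -> [0.5, 0.1666..., 0.1666..., 0.1666...]
```

Brute-force enumeration of all $n!$ orders is practical only for small voting bodies; for larger games one would average over sampled orders or use generating-function methods instead.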
This chapter attempts to give a structural interpretation to the distributed lag of investment on sales at the two-digit level in U.S. manufacturing. It first presents a simple model that captures the various sources of lags and their respective implications. It then estimates the model using data on investment and sales as well as direct information on the sources of lags. The spirit of the chapter is exploratory; the model is used mainly as a vehicle to construct, present, and interpret the data.
Lags in the response of investment expenditures to sales can be attributed to four main sources. The first is expectations. Investment depends on future sales, which themselves depend on current and past sales. The next two come from technology. One, costs of adjustment, is internal to the firm. The other, delivery lags, is external to the firm. Together they imply that the firm is neither willing nor able to adjust its capital stock completely and instantaneously to movements in sales. The last source is financial. Although the theory describes investment orders, data are about investment expenditures, which are related to orders by a distributed lag. Section 1 presents a model that incorporates these four sources explicitly and shows their respective implications.
Section 2 presents the basic investment and sales characteristics for 13 industries. It estimates a reduced-form relation of investment on sales and the capital stock, showing common patterns and differences across industries.
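In its simplest form, a reduced-form relation of the kind described would look something like
\[
I_t \;=\; a \;+\; \sum_{j=0}^{J} b_j\, S_{t-j} \;+\; c\, K_{t-1} \;+\; \varepsilon_t,
\]
where $I_t$ is investment, $S_t$ is sales, $K_{t-1}$ is the beginning-of-period capital stock, and the coefficients $b_j$ trace out the distributed lag; the notation and lag length here are illustrative, not taken from the chapter.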
The rapid growth of dynamic economic theory has contributed to the increasing use of nonlinear regression in applied time series econometrics. The interest in tractable theories of estimation and hypothesis testing for such nonlinear time series models has grown accordingly. The body of theory relevant to parametric structures that has emerged over the past few years is quite elegant and virtually complete in some respects.
Standard limiting results, such as the asymptotic normality of an estimator or the chi-squared distribution of a test statistic for testing restrictions, rely on trade-offs between moment assumptions and assumptions regarding the dependence of the stochastic process being considered. In particular, most current theories that allow deviations from stationarity assumptions rely on dependence concepts stronger than the ergodic property appropriate for stationary structures. Stationarity and/or such restrictions on the dependence properties of the involved stochastic processes are notoriously hard to verify in nonlinear models.
The problem addressed in this chapter rests on a potentially more serious consideration than difficulties in the verification of assumptions. Recent investigations of the stochastic properties of decision variables arising from various classes of dynamic models reveal that the stochastic processes describing such decision variables can indeed fail to have the ergodic property. Theoretical results obtained by Chamberlain and Wilson (1984) for a wide class of intertemporal consumption plans indicate this. Empirical examples illustrating explosiveness in models of consumption include the work of Hall (1978) and Daly and Hadjimatheou (1981).
Recently there has been much interest in chaotic dynamical systems and empirical tests on time series for the presence of deterministic chaos. A survey and exposition of some of this activity and especially empirical methodology is contained in Barnett and Chen (this volume) and Brock (1986). Deterministic chaos can look random to the naked eye and to some statistical tests such as spectral analysis.
In Brock and Dechert (1986) it is shown how a simple map from the closed interval [−½, ½] to itself can be used to generate a time series of pseudorandom numbers {x_t} that are uniformly distributed on [−½, ½]. This is well known. Brock and Dechert also show how this time series generates a Hilbert space of pseudorandom variables that is mean-square norm isometrically isomorphic to the Hilbert space of random variables generated by white noise {ε_t}, where {ε_t} is an independent and identically distributed (i.i.d.) sequence of random variables. Since the Wold decomposition theorem implies that a large class of stationary stochastic processes can be represented as a moving average of white noise (Anderson 1971, pp. 420–1), the Brock and Dechert result shows that examination of the empirical spectrum or, equivalently, the empirical autocovariance function of a time series {a_t} cannot tell the analyst whether {a_t} was generated by a deterministic mechanism or a stochastic mechanism. Something else is needed to distinguish deterministic from random systems.
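To illustrate the point numerically, one can iterate a chaotic map whose invariant distribution is uniform on [−½, ½] and check that its sample autocorrelations are indistinguishable from those of white noise. The sketch below is our own illustration (the exact map used by Brock and Dechert may differ): it iterates the logistic map, which is numerically well behaved, and transforms each iterate to the conjugate tent-map variable, which is uniform on [−½, ½].

```python
import numpy as np

def tent_map_series(n, seed=0.237):
    """Deterministic series uniform on [-1/2, 1/2]: iterate the logistic map
    y -> 4y(1-y) and map each iterate to its conjugate tent-map variable."""
    y = seed
    u = np.empty(n)
    for t in range(n):
        y = 4.0 * y * (1.0 - y)
        u[t] = (2.0 / np.pi) * np.arcsin(np.sqrt(y)) - 0.5
    return u

def sample_autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

u = tent_map_series(100_000)
print("mean:    ", u.mean())   # close to 0 (uniform on [-1/2, 1/2])
print("variance:", u.var())    # close to 1/12
for k in range(1, 6):
    # All close to 0: the spectrum/autocovariances look like white noise,
    # even though the series is fully deterministic.
    print(f"autocorr lag {k}:", sample_autocorr(u, k))
```

The sample moments only approximate their theoretical values, but the autocorrelations at all nonzero lags stay near zero, which is exactly why spectral or autocovariance analysis cannot separate this deterministic series from a genuinely random one.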