A voting rule compromises between the voters' conflicting claims by picking a single outcome from each preference profile. In his very influential book, Arrow [1963] proposed a more ambitious goal for the social planner, that of aggregating the preference profile into a complete ordering of outcomes. This ordering is meant to reflect the level of social welfare at each outcome, including the suboptimal ones. If exogenous constraints prevent the planner from implementing a socially best outcome [i.e., a maximal outcome of the social welfare ordering (SWO)], the social ordering allows him to distinguish welfare-improving changes of outcome from those that decrease social welfare.
The thrust of Arrow's approach is his axiom of independence of irrelevant alternatives (IIA). It states that the (social) welfare comparisons within any given subset of outcomes should not depend upon individual preferences outside this subset. Hence, IIA limits the information one may use when comparing two outcomes a and b: the voters' preferences between these two (who prefers a to b, who prefers b to a, who is indifferent?) should be all that matters in forming the social preference about a, b. The bite of the axiom is to reduce the task of defining an ordering of all outcomes to that of solving all pairwise comparisons, and then checking whether those comparisons together form a transitive SWO. And pairwise comparisons are easy to solve.
Suppose the public decision maker (hereafter called the planner) faces a specific cost- or surplus-sharing problem and has made up his mind about the just outcome (e.g., he adopts one of the five paramount methods for sharing the cost of a public good discussed in Chapter 6). He still has one difficulty to solve before his favorite solution is implemented, namely, he must elicit from individual agents a report of their preferences (in this particular example, he must find out about the individual benefits from consuming the public good).
Information about individual preferences is fundamentally private to the agent himself. Even if I have clear evidence that you prefer wine over beer, I cannot deny your right to pretend the contrary.
Thus, in a legal and practical sense, all information about preferences must emanate from the concerned agents themselves. This implies that an agent can influence the outcome of the mechanism by falsifying his preferences (e.g., by understating or overstating the benefit he derives from the public good). Of course, he will manipulate in such fashion only when it is in his interest to do so.
The “marginality principle” states that the share of joint output attributable to any single factor of production should depend only on that factor's own contribution to output. This property, together with symmetry and efficiency, uniquely determines the Shapley value. A similar result characterizes Aumann—Shapley pricing for smooth production functions with variable input levels.
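The marginal-contribution idea behind the Shapley value can be made concrete in a few lines. The following sketch (the three-player game `v` and the function name are invented for illustration, not taken from the text) computes each player's marginal contribution averaged over all arrival orders; by efficiency, the shares sum to the worth of the grand coalition:

```python
from itertools import permutations

def shapley_value(players, v):
    """Average each player's marginal contribution over all arrival orders."""
    orders = list(permutations(players))
    value = {p: 0.0 for p in players}
    for order in orders:
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: value[p] / len(orders) for p in players}

# Invented 3-player game: a coalition is worth 1 only if it contains
# player 1 and at least one other player.
v = lambda S: 1.0 if 1 in S and len(S) >= 2 else 0.0
sv = shapley_value([1, 2, 3], v)
print(sv)  # player 1 gets 2/3, players 2 and 3 get 1/6 each
```

Symmetry shows up in players 2 and 3 receiving equal shares, and efficiency in the shares summing to v({1, 2, 3}) = 1.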
Introduction
In a perfectly competitive market, the wage of a laborer equals his marginal product. No ethical judgment need be made as to whether marginal productivity is a “just” rule of compensation so long as competitive markets are accepted as the correct form of economic organization. Nevertheless, the idea that rewards should be in proportion to contributions has considerable ethical appeal in itself, and appears to reflect widely held views about what constitutes “just compensation” without any reference to the theory of perfect competition.
In this paper we shall ask what “compensation in accordance with contribution” means in the absence of competition. How does the marginality principle translate into a rule of distributive justice when cooperation rather than competition is the mode of economic organization?
Unfortunately, if we attempt to translate marginalism directly into a cooperative sharing rule, difficulties arise. For, except in very special cases, the sum of individuals' marginal contributions to output will not equal total output.
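A small numeric example (the coalition function is invented for illustration) makes the difficulty concrete: with decreasing returns, paying every agent his marginal contribution fails to exhaust total output.

```python
import math

# Invented coalition function with decreasing returns: v(S) = sqrt(|S|).
def v(S):
    return math.sqrt(len(S))

players = {1, 2, 3}
total = v(players)
marginals = [v(players) - v(players - {i}) for i in players]
print(total, sum(marginals))  # marginal payments fall short of total output
```

Here total output is sqrt(3) ≈ 1.73 while the marginal contributions sum to only about 0.95; with increasing returns the inequality would run the other way.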
It has been much remarked that different solutions become equivalent in the setting of economies in which there is “perfect competition”; that is, no individual can affect the overall outcome. The conjecture that the core coincides with competitive (Walras) allocations was made as far back as 1881 by Edgeworth [11]. His insight has been confirmed in increasing generality in a series of papers [20,9,2,14,13,7,8,1] over the last three decades. Another line of inquiry originated with the recent introduction of a value for games by Shapley [17]. It was found that this also coincided with the above two [19,6,3].
The equivalence phenomenon is striking in view of the fact that these solutions are posited on entirely different grounds. If we restrict ourselves to smooth, transferable utilities, then the result is even sharper: not only do the solutions coincide, but they are also unique (i.e., consist of a single payoff). Our aim here is to give another view of this “coincident payoff” by putting it on an axiomatic foundation. As an upshot of our approach, we get a “metaequivalence” theorem, by way of a characterization: Any solution coincides with this payoff if and only if it satisfies our axioms.
The transferable utility assumption is undoubtedly restrictive.
A dynamic system is constructed to model a possible negotiation process for players facing a (not necessarily convex) pure bargaining game. The critical points of this system are the points where the “Nash product” is stationary. All accumulation points of the solutions of this system are critical points. It turns out that the asymptotically stable critical points of the system are precisely the isolated critical points where the Nash product has a local maximum.
Introduction
J. F. Nash (1950) introduced his famous solution for the class of two-person pure bargaining convex games. His solution was defined by a system of axioms that were meant to reflect intuitive considerations and judgments. The axioms produced a unique one-point solution that turned out to be that point at which the “Nash product” is maximized. Harsanyi (1959) extended Nash's ideas and obtained a similar solution for the class of n-person pure bargaining convex games. (See also Harsanyi 1977, chap. 10.)
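For a concrete sense of the Nash product, here is a minimal sketch (the linear frontier x + y = 1 and the disagreement point (0, 0) are invented for illustration) that locates its maximizer by grid search:

```python
# Feasible frontier x + y = 1, disagreement point (0, 0): the Nash
# solution maximizes the Nash product x * y = x * (1 - x) over [0, 1].
grid = [i / 1000 for i in range(1001)]
x_star = max(grid, key=lambda x: x * (1 - x))
print(x_star, 1 - x_star)  # the equal split (0.5, 0.5)
```

On this symmetric, linear frontier the maximizer is the equal split, as Nash's symmetry and efficiency axioms require.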
Harsanyi (1956) also suggested a procedure, based on the Zeuthen principle, that modeled a possible bargaining process that leads the players to the Nash—Harsanyi point. (See also Harsanyi 1977, chap. 8.)
Recently, in an elegant paper, T. Lensberg (1981) (see also Lensberg 1985) demonstrated that the Nash—Harsanyi point could be characterized by another system of axioms.
In the following paper we offer a method for the a priori evaluation of the division of power among the various bodies and members of a legislature or committee system. The method is based on a technique of the mathematical theory of games, applied to what are known there as “simple games” and “weighted majority games.” We apply it here to a number of illustrative cases, including the United States Congress, and discuss some of its formal properties.
The designing of the size and type of a legislative body is a process that may continue for many years, with frequent revisions and modifications aimed at reflecting changes in the social structure of the country; we may cite the role of the House of Lords in England as an example. The effect of a revision usually cannot be gauged in advance except in the roughest terms; it can easily happen that the mathematical structure of a voting system conceals a bias in power distribution unsuspected and unintended by the authors of the revision. How, for example, is one to predict the degree of protection which a proposed system affords to minority interests? Can a consistent criterion for “fair representation” be found? It is difficult even to describe the net effect of a double representation system such as is found in the U.S. Congress (i.e., by states and by population), without attempting to deduce it a priori.
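One way to make such an a priori evaluation concrete is to count, over all orders in which the voters might line up behind a bill, how often each voter casts the pivotal vote. The following sketch (the weighted majority game [3; 2, 1, 1] and the function name are invented for illustration) computes these pivot frequencies:

```python
from itertools import permutations

def shapley_shubik(weights, quota):
    """Fraction of voting orders in which each voter is pivotal."""
    counts = [0] * len(weights)
    orders = list(permutations(range(len(weights))))
    for order in orders:
        running = 0
        for voter in order:
            running += weights[voter]
            if running >= quota:  # this voter tips the coalition past the quota
                counts[voter] += 1
                break
    return [c / len(orders) for c in counts]

# Invented weighted majority game [3; 2, 1, 1]: quota 3, one large voter.
index = shapley_shubik([2, 1, 1], 3)
print(index)  # the large voter holds 2/3 of the power
```

Note the bias the text warns about: the large voter holds half the total weight but two-thirds of the power, while each small voter holds a quarter of the weight but only one-sixth of the power.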
This chapter is concerned with how the Shapley value can be interpreted as an expected utility function, the consequences of interpreting it in this way, and with what other value functions arise as utility functions representing different preferences.
These questions brought themselves rather forcefully to my attention when I first taught a graduate course in game theory. After introducing utility theory as a way of numerically representing sufficiently regular individual preferences, and explaining which comparisons involving utility functions are meaningful and which are not, I found myself at a loss to explain precisely what comparisons could meaningfully be made using the Shapley value, if it was to be interpreted as a utility as suggested in the first paragraph of Shapley's 1953 paper. In order to state the problem clearly, it will be useful to remark briefly on some of the familiar properties of utility functions.
First, utility functions represent preferences, so individuals with different preferences will have different utility functions. When preferences are measured over risky as well as riskless prospects, individuals who have the same preferences over riskless prospects may nevertheless have different preferences over lotteries, and so may have different expected utility functions.
Second, there are some arbitrary choices involved in specifying a utility function, so the information contained in an individual's utility function is really represented by an equivalence class of functions.
The study of coalition structures has been seriously explored only recently. Coalition structures are already implicit in the von Neumann—Morgenstern (1944) solutions; because of their internal and external stability the solutions isolate those stable coalition structures that generate the final payoffs. This is to be contrasted with the extensive subsequent literature on the core, in which “blocking” is merely a criterion for accepting or rejecting a proposed allocation; no specific coalition structure is implicit in any core allocation. Analysis of games in partition function form (see, for example, Thrall and Lucas 1963) is a more explicit way of studying restrictions on coalition structures. Perhaps the best-studied class of games for which the core of a coalition structure has been investigated is the “central assignment games” (see, for example, Shapley and Shubik 1972; Kaneko 1982; and Quinzii 1984). This class includes the particular case of the “marriage games” (Gale and Shapley 1962) and is closely related to the various variants of the “job matching games” (see, for example, Crawford and Knoer 1981; Kelso and Crawford 1982; Roth 1984a,b). The nonemptiness of the core of the coalition structure of these games is an important result, to which we return in Section 3.
TU economies with private production are shown to have a value, as defined in Mertens (1988), without any differentiability or interiority or other restriction. An explicit formula is given, describing the value as a barycenter of the core, for a probability distribution depending only on the set of net trades at equilibrium.
Introduction
We prove existence and exhibit the formula for the value of transferable utility markets, getting rid of differentiability assumptions and allowing for private production (as a first step toward removing the Aumann—Perles assumptions and the monotonicity assumptions). Under differentiability assumptions, the treatment by Aumann and Shapley yielded equivalence with the core. In the nondifferentiable case the more powerful value constructed in Mertens (1988) is required. In particular, whereas the differentiable case uses the symmetry axiom only in a “first-order” sense—comparable to the strong law of large numbers—in the nondifferentiable case it is used in its full force, in a “second-order” sense—comparable to the central limit theorem. But contrary to the case of the central limit theorem, no normal distribution appears here. Indeed, as shown in Hart (1980), the formulas involving those would satisfy only a restricted symmetry property (and are not characterized by it), so no value would be obtained.
In their book Values of Non-Atomic Games, Aumann and Shapley [1] define the value for spaces of nonatomic games as a map from the space of games into bounded finitely additive games that satisfies a list of plausible axioms: linearity, symmetry, positivity, and efficiency. One of the themes of the theory of values is to demonstrate that on given spaces of games this list of plausible axioms determines the value uniquely. One of the spaces of games that have been extensively studied is pNA, which is the closure of the linear space generated by the polynomials of nonatomic measures. Theorem B of [1] asserts that a unique value φ exists on pNA and that ||φ|| = 1. This chapter introduces a canonical way to approximate games in pNA by games in pNA that are “identified” with finite games. These are the multilinear nonatomic games—that is, games v of the form v = F ∘ (μ1, μ2, …, μn), where F is a multilinear function and μ1, μ2, …, μn are mutually singular nonatomic measures.
The approximation theorem yields short proofs to classic results, such as the uniqueness of the Aumann—Shapley value on pNA and the existence of the asymptotic value on pNA (see [1, Theorem F]), as well as short proofs for some newer results such as the uniqueness of the μ value on pNA(μ) (see [4]).
The theory of values of nonatomic games as developed by Aumann and Shapley was first applied by Billera, Heath, and Raanan (1978) to set equitable telephone billing rates that share the cost of service among users. Billera and Heath (1982) and Mirman and Tauman (1982a) “translated” the axiomatic approach of Aumann and Shapley from values of nonatomic games to a price mechanism on the class of differentiable cost functions and hence provided a normative justification, using economic terms only, for the application of the theory of nonatomic games to cost allocation problems. New developments in the theory of games inspired parallel developments in cost allocation applications. For instance, the theory of semivalues by Dubey, Neyman, and Weber (1981) inspired the work of Samet and Tauman (1982), which characterized the class of all “semi-price” mechanisms (i.e., price mechanisms that do not necessarily satisfy the break-even requirement) and led to an axiomatic characterization of the marginal cost prices. The theory of Dubey and Neyman (1984) of nonatomic economies inspired the work by Mirman and Neyman (1983) in which they characterized the marginal cost prices on the class of cost functions that arise from long-run production technologies. Young's (1984) characterization of the Shapley value by the monotonicity axiom inspired his characterization (Young 1985a) of the Aumann—Shapley price mechanism on the class of differentiable cost functions.
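For differentiable cost functions, the Aumann–Shapley price of commodity i is the marginal cost averaged along the diagonal from 0 to the demand vector x, p_i(x) = ∫₀¹ ∂C/∂y_i(t·x) dt, which by construction satisfies the break-even requirement Σ p_i x_i = C(x) when C(0) = 0. A numerical sketch (the quadratic cost function and the helper name are invented for illustration):

```python
def aumann_shapley_price(cost, x, i, steps=1000, h=1e-6):
    """Average the i-th marginal cost along the diagonal t*x, t in [0, 1],
    using midpoint integration and a forward-difference derivative."""
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) / steps
        point = [t * xj for xj in x]
        bumped = list(point)
        bumped[i] += h
        total += (cost(bumped) - cost(point)) / h
    return total / steps

# Invented quadratic cost function: C(y) = (y1 + y2)^2.
C = lambda y: (y[0] + y[1]) ** 2
x = [2.0, 3.0]
prices = [aumann_shapley_price(C, x, i) for i in range(2)]
revenue = sum(p * xi for p, xi in zip(prices, x))
print(prices, revenue, C(x))  # revenue matches cost: break-even holds
```

Both prices come out near x1 + x2 = 5, and the resulting revenue 5·2 + 5·3 = 25 equals C(x), illustrating the break-even property that distinguishes Aumann–Shapley prices from general semi-price mechanisms.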