We will first consider only vocal cooperative games. We make the following assumptions. The two players can achieve any payoff vector $u = (u_1, u_2)$ within the payoff space $P$ of the game, if they can agree which particular payoff vector $u$ to adopt, i.e., if they can agree how to divide the payoffs between them. The players are free to use jointly randomized mixed strategies, which make the payoff space $P$ a convex set. Moreover, $P$ is assumed to be bounded and closed, i.e., compact. We also exclude the degenerate case in which the payoff space is a segment of a straight line with $u_i = \text{const.}$ for either player $i$: Any such case will always have to be treated as a strictly noncooperative game, since player $i$ would have no incentive whatever to cooperate with the other player.
Among cooperative games we will distinguish two cases, depending on the nature of the conflict situation that would emerge if the two players could not agree on their final payoffs $u_1 = \bar{u}_1$ and $u_2 = \bar{u}_2$. In a simple bargaining game the rules of the game themselves fully specify the conflict-payoff vector or conflict point $c = (c_1, c_2)$ to which the players would be confined in such a conflict situation. This means that the players have essentially only one conflict strategy, viz., simple noncooperation.
In an $n$-person simple bargaining game the $n$ players have to choose a payoff vector $u = (u_1, \ldots, u_n)$ from a compact and convex set $P$ of possible payoff vectors, called the payoff space of the game. The choice of $u$ must be by unanimous agreement of all $n$ players. If they cannot reach unanimous agreement, then they obtain the conflict payoffs $c_1, \ldots, c_n$. The payoff vector $c = (c_1, \ldots, c_n)$ is called the conflict point of the game. We will assume that $c \in P$.
That region $P^*$ of the payoff space $P$ which lies in the orthant defined by the $n$ inequalities $u_i \geq c_i$ for $i = 1, \ldots, n$, is called the agreement space. Like $P$ itself, $P^*$ is always a compact and convex set.
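To make these definitions concrete, here is a minimal Python sketch, added as an illustration and not taken from the original text, that selects the agreement space $P^*$ from a finite sample of payoff vectors; the sample points and the conflict point are invented for the example.

```python
# Illustrative sketch with invented data: selecting the agreement
# space P* from a finite sample of payoff vectors.

def in_agreement_space(u, c):
    """True if payoff vector u gives every player i at least the
    conflict payoff, i.e., u_i >= c_i for all i."""
    return all(ui >= ci for ui, ci in zip(u, c))

c = (1.0, 2.0)                                   # conflict point
P_sample = [(0.5, 3.0), (2.0, 2.5), (1.5, 1.0), (3.0, 2.0)]

P_star = [u for u in P_sample if in_agreement_space(u, c)]
print(P_star)                                    # [(2.0, 2.5), (3.0, 2.0)]
```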
We will exclude the degenerate case where the payoff(s) of some player(s) is (or are) constant over the entire agreement space $P^*$. For in this case this player (or these players) would have no interest in cooperating with the other player(s), and so the game would not be a truly cooperative game.
The set of all points $u$ in the payoff space $P$ that are undominated, even weakly, by any other point $u^*$ in $P$ is called the upper boundary $H$ of $P$. In other words, $H$ is the set of strongly efficient points in $P$. In general the payoff space $P$ is a set of $n$ dimensions.
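A similar hedged sketch, again with invented sample points, picks out the upper boundary $H$ of a finite sample: a point belongs to $H$ exactly when no other sampled point weakly dominates it.

```python
# Illustrative sketch with invented data: the upper boundary H
# (the strongly efficient points) of a finite sample of payoffs.

def weakly_dominates(v, u):
    """v weakly dominates u: v is at least as good for every player
    and strictly better for at least one (v differs from u)."""
    return all(vi >= ui for vi, ui in zip(v, u)) and v != u

def upper_boundary(points):
    return [u for u in points
            if not any(weakly_dominates(v, u) for v in points)]

P_sample = [(2.0, 2.5), (3.0, 2.0), (2.5, 2.5), (1.0, 1.0)]
print(upper_boundary(P_sample))    # [(3.0, 2.0), (2.5, 2.5)]
```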
The main concern of this book is with game situations (situations of strategic interdependence), in which the outcome depends on mutual interaction between two or more rational individuals, each of them pursuing his own interests (and his own values) against the other individuals, who are likewise pursuing their own interests (and their own values). In earlier chapters we discussed situations of individual independence (certainty, risk, and uncertainty), in which the outcome depends on the actions of only one individual (and possibly on chance). We also discussed moral situations, in which the outcome does depend on interaction between two or more individuals but in which this outcome and these individuals' actions are evaluated, not in terms of their own individual interests but rather in terms of the interests of society as a whole – as seen by an impartial but sympathetic observer. However, all of this was merely a preliminary to our analysis of game situations.
Following von Neumann and Morgenstern [1944] it is customary to analyze what we call game situations by using parlor games – already existing ones or ones specially constructed for this very purpose – as analytical models. (More specifically, what are used as models are games of strategy, where the outcome depends at least in part on a rational choice of strategy by the participants rather than on mere physical skill or on mere chance.) Hence arise the term “game situations” and the name “game theory” for the theory analyzing such situations.
How to define rational behavior (practical rationality) is a philosophical problem of fundamental importance – both in its own right and by virtue of its close connection with the problem of theoretical rationality. The concept of rational behavior is equally fundamental to a number of more specialized disciplines: to normative disciplines such as decision theory (utility theory), game theory, and ethics; and to some positive social sciences, such as economics and certain, more analytically oriented, versions of political science and of sociology.
This book presents what I believe to be the first systematic attempt to develop a conceptually clear, and quantitatively definite, general theory of rational behavior. No doubt, technically more advanced and philosophically more sophisticated versions of such a theory will soon follow. In fact, the first version of this book was completed in 1963, but game theory has been advancing at a very rapid rate since then, and my own thinking has also been changing. Thus, I have revised this manuscript several times to bring it more in line with new developments, but this process must stop if this material is ever to be published. I hope the reader will bear with me if he finds that this book does not cover some recent results, even some of my own. In such a rapidly growing subject as game theory, only journal articles – indeed, only research reports – can be really up to date.
In this chapter we will define a solution for noncooperative and for almost-noncooperative games (both two-person and n-person). In Section 5.17, we defined a (strictly) noncooperative game as a game in which no agreement between the players has any binding force: Thus any player is free to violate any agreement even if he obtains no positive benefit by doing so. In contrast, we defined an almost-noncooperative game as a game in which the players are bound by any agreement that they make as long as they cannot obtain any positive benefit by violating it, though they are free to disregard any agreement if they can achieve a positive gain (however small) by doing so.
Accordingly, whereas in a cooperative game any possible strategy n-tuple (and any possible probability mixture of such strategy n-tuples) will be stable once the players have agreed to adopt it, in a noncooperative or almost-noncooperative game only strategy n-tuples satisfying certain special stability requirements – which we call eligible-strategy n-tuples – have sufficient stability to be used by rational players. A strategy n-tuple can be eligible only if it is an equilibrium point or a maximin point (see Sections 5.12 and 5.13). As we have argued, in a strictly noncooperative game a profitable equilibrium point will be eligible only if it is a strong equilibrium point or a centroid equilibrium point, while in an almost-noncooperative game a profitable equilibrium point is always eligible.
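As a concrete illustration of the equilibrium-point requirement, the following sketch, with invented payoff matrices rather than an example from the text, tests whether a pure-strategy pair of a two-person game in normal form is an equilibrium point, i.e., whether either player could gain by a unilateral deviation.

```python
# Illustrative sketch with invented payoffs: testing whether a
# pure-strategy pair (i, j) is an equilibrium point of a two-person
# game given by payoff matrices A (player 1) and B (player 2).

A = [[2, 0],    # A[i][j]: player 1's payoff at strategy pair (i, j)
     [3, 1]]
B = [[2, 3],    # B[i][j]: player 2's payoff at strategy pair (i, j)
     [0, 1]]

def is_equilibrium(i, j):
    """No unilateral deviation raises the deviating player's payoff."""
    best_for_1 = all(A[i][j] >= A[k][j] for k in range(len(A)))
    best_for_2 = all(B[i][j] >= B[i][m] for m in range(len(B[i])))
    return best_for_1 and best_for_2

print(is_equilibrium(1, 1))   # True: (1, 1) is an equilibrium point
print(is_equilibrium(0, 0))   # False: player 1 gains by moving to row 1
```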
In Section 1.3 we briefly summarized the main results of individual decision theory (utility theory). In this chapter we will discuss these results in more detail. Recall that we speak of certainty when any action that the decision maker can take can have only one possible outcome, known in advance. We speak of risk or uncertainty when at least some of the actions available to the decision maker can have two or more alternative outcomes, without his being able to discern which particular outcome will actually arise in any given case.
More particularly we speak of risk when the objective probabilities (long-run frequencies) associated with all possible outcomes are known to the decision maker. We speak of uncertainty if at least some of these objective probabilities are unknown to him (or are not even well defined).
For example, I make a risky decision when I buy a lottery ticket offering known prizes with known probabilities. In contrast, I make an uncertain decision when I bet on horses or when I make a business investment, because in the case of horse races and business investments the objective probabilities of alternative outcomes are not known.
To describe the expected results of any given human action under certainty, risk, and uncertainty, we introduce the concepts of “sure prospects”, “risky prospects”, and “uncertain prospects”. We also introduce the term “alternatives” as a common name for sure prospects, risky prospects, and uncertain prospects.
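As an added illustration with invented numbers, not the author's own example, a risky prospect can be represented as a list of outcome-utility and probability pairs and evaluated by its expected utility; a decision maker under risk would compare prospects by this number.

```python
# Illustrative sketch with invented utilities and probabilities:
# a risky prospect as (utility, probability) pairs, evaluated by
# expected utility.

def expected_utility(prospect):
    assert abs(sum(p for _, p in prospect) - 1.0) < 1e-9
    return sum(u * p for u, p in prospect)

lottery = [(100.0, 0.01), (0.0, 0.99)]   # risky prospect: known odds
sure_thing = [(1.0, 1.0)]                # sure prospect: one outcome

print(expected_utility(lottery))         # 1.0
print(expected_utility(sure_thing))      # 1.0: indifferent here
```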
In the preceding chapters I have tried to propose a precise definition – or more exactly a family of precise definitions – for the concept of rational behavior. In the case of individual pragmatic decisions I have argued that rational behavior can be defined in terms of utility maximization, or expected-utility maximization, in accordance with modern decision theory (and modern economic theory). In the case of moral decisions I have suggested the utilitarian criterion as the appropriate rationality criterion, involving maximization of the average utility of all individuals in the society.
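In symbols, using a standard formulation rather than a quotation from the text, the utilitarian criterion ranks social alternatives by their average utility:

$$W = \frac{1}{n} \sum_{i=1}^{n} U_i,$$

where $U_1, \ldots, U_n$ are the utility levels of the $n$ individuals in the society; a moral decision is then rational if it maximizes $W$.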
Finally, in the case of game situations I have argued that we need a concept of rational behavior yielding a determinate solution (i.e., a unique solution payoff vector) for each specific game. For various classes of cooperative and of noncooperative games I have suggested a number of solution concepts, all related to the Nash-Zeuthen bargaining solution, to the modified Shapley value, and to their various generalizations. Though the solution concepts suggested for different game classes have differed in specific detail, all have been based on the same general rationality postulates. My discussion, however, has been restricted to what I have called “classical” games (i.e., to games with complete information, either fully cooperative or fully noncooperative in character, and admitting of representation by their normal form) – even though, as I have shown in other publications, one can extend this analysis also to certain classes of “nonclassical” games (e.g., to games with incomplete information [Harsanyi, 1967, 1968a, 1968b; Harsanyi and Selten, 1972]).
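To connect these solution concepts with the earlier bargaining definitions, here is a minimal sketch, with invented data and under the standard characterization that the Nash bargaining solution maximizes the product of the players' utility gains over the conflict point:

```python
# Illustrative sketch with invented data: the Nash bargaining solution
# as the agreement-space point maximizing the "Nash product" of the
# players' utility gains over the conflict point c.

c = (1.0, 2.0)
P_star = [(2.0, 2.5), (3.0, 2.0), (2.5, 2.5)]   # sampled agreement points

def nash_product(u, c):
    prod = 1.0
    for ui, ci in zip(u, c):
        prod *= ui - ci
    return prod

solution = max(P_star, key=lambda u: nash_product(u, c))
print(solution)   # (2.5, 2.5): gains (1.5, 0.5) give product 0.75
```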
Disregard of one's personal identity – a model for moral value judgments
In Section 1.3 we divided the general theory of rational behavior into individual decision theory, ethics, and game theory. In Chapter 3 we summarized the main results of individual decision theory, following Debreu [1959], Herstein and Milnor [1953], and Anscombe and Aumann [1963]. In this chapter we will review the main results of our own work in ethics and will discuss a related result by Fleming [cf. Harsanyi, 1953, 1955, and 1958; Fleming, 1952]. Most of these results were originally developed for the purposes of welfare economics but will be discussed here from a more general ethical point of view. The remaining chapters of this book will deal with game theory.
People often take a friendly (positive) or an unfriendly (negative) interest in other people's well-being. Technically this means that the utility function of a given individual i may assign positive or negative utility to the utility level as such of some other individuals j, or to the objective economic, social, biological, and other conditions determining the latter's utility levels. The question naturally arises: What factors will decide the relative importance that any given individual's utility function will assign to the well-being of various other individuals or social groups? We have called this question the problem of dominant loyalties (Section 2.3). This question obviously requires a rather complicated answer.
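Schematically, and purely as an added illustration rather than a formula from the text, such interdependence might be written as

$$U_i = u_i + \sum_{j \neq i} a_{ij} u_j,$$

where $u_i$ is individual $i$'s utility from his own objective conditions and the weight $a_{ij}$, positive for a friendly and negative for an unfriendly interest, measures the importance that $i$'s utility function assigns to $j$'s well-being. The problem of dominant loyalties then asks what determines these weights $a_{ij}$.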