JP is a 52-year-old man with a history of mixed histiocytic–lymphocytic lymphoma of the stomach, diagnosed in 1975. He was treated with surgical excision and postoperative radiotherapy and had no clinical evidence of recurrence five years later. In the spring of 1980, four months prior to the present admission, he developed fatigue and pallor. Hematologic evaluation resulted in the diagnosis of acute myelogenous leukemia.
On the day of admission, the patient developed fever and chills. Physical examination was remarkable only for a temperature of 39°C and general pallor. Laboratory evaluation revealed a hemoglobin level of 8.5 g per deciliter, a white blood cell count of 35,000 per cubic millimeter with 35% myeloblasts, and a platelet count of 108,000 per cubic millimeter. Tobramycin and ticarcillin were administered. Subsequently, two blood cultures were positive for Staphylococcus epidermidis; oxacillin was also administered. On the second hospital day, antileukemic therapy with cytosine arabinoside and daunorubicin was begun. Cotrimoxazole was given prophylactically.
After an initial defervescence, the patient developed severe dysphagia and recurrent fever. Barium swallow suggested the presence of esophageal ulcers, consistent with, but not diagnostic of, Candida esophagitis. At that time, all other clinical and laboratory data failed to disclose a source of infection. The patient's white blood cell count during this episode was 6,000 per cubic millimeter with 8% polymorphonuclear leukocytes, 50% lymphocytes, and 40% blast forms.
The modern theory of decision making under risk emerged from a logical analysis of games of chance rather than from a psychological analysis of risk and value. The theory was conceived as a normative model of an idealized decision maker, not as a description of the behavior of real people. In Schumpeter's words, it “has a much better claim to being called a logic of choice than a psychology of value” (1954, p. 1058).
The use of a normative analysis to predict and explain actual behavior is defended by several arguments. First, people are generally thought to be effective in pursuing their goals, particularly when they have incentives and opportunities to learn from experience. It seems reasonable, then, to describe choice as a maximization process. Second, competition favors rational individuals and organizations. Optimal decisions increase the chances of survival in a competitive environment, and a minority of rational individuals can sometimes impose rationality on the whole market. Third, the intuitive appeal of the axioms of rational choice makes it plausible that the theory derived from these axioms should provide an acceptable account of choice behavior.
The thesis of the present article is that, in spite of these a priori arguments, the logic of choice does not provide an adequate foundation for a descriptive theory of decision making. We argue that the deviations of actual behavior from the normative model are too widespread to be ignored, too systematic to be dismissed as random error, and too fundamental to be accommodated by relaxing the normative system.
Jacob Bronowski (1978, pp. 78–9) tells the following story about Bertrand Russell, who is reputed once to have said at a dinner party:
“Oh, it is useless talking about inconsistent things, from an inconsistent proposition you can prove anything you like!”… Someone at the dinner table said, “Oh, come on!” He said, “Well, name an inconsistent proposition” and the man said, “Well, what shall we say, 2 = 1.” “All right,” said Russell, “what do you want me to prove?” The man said, “I want you to prove that you are the pope.” “Why,” said Russell, “the pope and I are two, but two equals one, therefore the pope and I are one.”
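Russell's quip is the logical principle of explosion (ex falso quodlibet): from an inconsistent premise, anything follows. As a minimal sketch, the principle can even be checked mechanically; the following Lean 4 fragment is my own illustration, not part of the original text:

```lean
-- From the inconsistent premise 2 = 1, any proposition P follows.
example (P : Prop) (h : (2 : Nat) = 1) : P :=
  absurd h (by decide)   -- `decide` proves 2 ≠ 1; `absurd` then yields any goal
```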
Now consider the following from Emerson's essay, “Self-reliance”:
A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. (1883, p. 58)
The above passages reflect the conflict that lies at the heart of current research in behavioral decision theory – namely, the importance of consistency in following rules, axioms, and the like, versus abandoning rules in particular cases when judgments and choices seem to imply a “foolish consistency.” The present addendum to our review of behavioral decision theory considers this issue briefly (for more details, see Einhorn and Hogarth, 1981). We begin by examining one example of the recent work on “cognitive illusions” (Tversky and Kahneman, 1981).
A dynamic system is constructed to model a possible negotiation process for players facing a (not necessarily convex) pure bargaining game. The critical points of this system are the points where the “Nash product” is stationary. All accumulation points of the solutions of this system are critical points. It turns out that the asymptotically stable critical points of the system are precisely the isolated critical points where the Nash product has a local maximum.
Introduction
J. F. Nash (1950) introduced his famous solution for the class of two-person pure bargaining convex games. His solution was defined by a system of axioms that were meant to reflect intuitive considerations and judgments. The axioms produced a unique one-point solution that turned out to be that point at which the “Nash product” is maximized. Harsanyi (1959) extended Nash's ideas and obtained a similar solution for the class of n-person pure bargaining convex games. (See also Harsanyi 1977, chap. 10.)
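As a hedged numerical illustration of the point at which the Nash product is maximized (the feasible frontier below is an assumed example, not taken from the paper), one can locate the maximizer on a simple two-person frontier by direct search:

```python
# Illustrative sketch: two-person bargaining with disagreement point (0, 0)
# and an assumed feasible frontier x = 1 - y**2 for y in [0, 1].
# The Nash point maximizes the Nash product x * y over the frontier.
import numpy as np

def nash_product_point(n_grid=100001):
    y = np.linspace(0.0, 1.0, n_grid)   # parametrize the frontier
    x = 1.0 - y**2
    i = int(np.argmax(x * y))           # grid-search the Nash product
    return x[i], y[i]

x, y = nash_product_point()
# Analytically: maximizing (1 - y**2) * y gives y = 1/sqrt(3), x = 2/3.
```

On this assumed frontier the numerical maximizer agrees with the calculus solution, which is the stationarity property the paper's dynamic system is built around.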
Harsanyi (1956) also suggested a procedure, based on the Zeuthen principle, that modeled a possible bargaining process that leads the players to the Nash–Harsanyi point. (See also Harsanyi 1977, chap. 8.)
Recently, in an elegant paper, T. Lensberg (1981) (see also Lensberg 1985) demonstrated that the Nash–Harsanyi point could be characterized by another system of axioms.
In the following paper we offer a method for the a priori evaluation of the division of power among the various bodies and members of a legislature or committee system. The method is based on a technique of the mathematical theory of games, applied to what are known there as “simple games” and “weighted majority games.” We apply it here to a number of illustrative cases, including the United States Congress, and discuss some of its formal properties.
The designing of the size and type of a legislative body is a process that may continue for many years, with frequent revisions and modifications aimed at reflecting changes in the social structure of the country; we may cite the role of the House of Lords in England as an example. The effect of a revision usually cannot be gauged in advance except in the roughest terms; it can easily happen that the mathematical structure of a voting system conceals a bias in power distribution unsuspected and unintended by the authors of the revision. How, for example, is one to predict the degree of protection which a proposed system affords to minority interests? Can a consistent criterion for “fair representation” be found? It is difficult even to describe the net effect of a double representation system such as is found in the U.S. Congress (i.e., by states and by population), without attempting to deduce it a priori.
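The a priori power evaluation described above can be sketched in a few lines. The weights and quota below are an assumed textbook example, not data from the paper; the index counts, over all orderings in which a coalition might assemble, how often each voter is the pivot who first carries the coalition past the quota:

```python
# Sketch of the Shapley-Shubik power index for a weighted majority game
# (brute force over all orderings; fine for small player sets).
from itertools import permutations
from fractions import Fraction

def shapley_shubik(weights, quota):
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            total += weights[player]
            if total >= quota:        # this player tips the coalition over the quota
                pivots[player] += 1
                break
    total_orders = sum(pivots)        # equals n!, since each ordering has one pivot
    return [Fraction(p, total_orders) for p in pivots]

# Assumed example: weights 50, 49, 1 with quota 51.
index = shapley_shubik([50, 49, 1], 51)
```

In this example the 49-vote and 1-vote players come out with equal power (1/6 each), while the 50-vote player holds 2/3: exactly the kind of gap between nominal weight and effective power that, as the passage notes, a voting rule can conceal.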
Imagine that you will shortly be asked to consciously select one of several urns and from this urn you will then be asked to randomly select one ball. For the moment, we assume that each urn contains exactly N balls (say, 1,000), and each ball in the selected urn is equally likely to be chosen. Each ball has a number on it which specifies the incremental monetary return to you for drawing that ball.
Suppose that you have the opportunity to examine the balls and their numbers in all the urns before deciding upon your choice of urn. How would you use that opportunity? A useful answer to this question would have to take some account of the length of time available for your examination. Here we will adopt the view that time is available for any extensive analysis that you would care to make.
We will present three different but related techniques for choosing among urns. In order to decide when and for whom these techniques are appropriate, we shall discuss various behavioral assumptions that underlie each of these techniques. In mathematical parlance, we shall discuss necessary and sufficient behavioral assumptions that justify each of these techniques.
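The excerpt does not name the three techniques, so the following is only a baseline sketch of the simplest candidate, ranking urns by expected monetary value under the stated equal-likelihood assumption:

```python
# Hedged illustration (the chapter's actual techniques are not specified
# in this excerpt): with every ball in an urn equally likely to be drawn,
# rank urns by the mean of their ball values.
def best_urn(urns):
    """urns: dict mapping urn name to a list of monetary ball values."""
    means = {name: sum(balls) / len(balls) for name, balls in urns.items()}
    return max(means, key=means.get), means

# Assumed toy data: a risky urn and a safe urn with lower mean.
urns = {"A": [0, 0, 100], "B": [30, 30, 30]}
choice, means = best_urn(urns)
```

Note that this criterion ignores risk attitude entirely: urn A wins on expected value even though urn B is a sure 30, which is precisely why the chapter's behavioral assumptions matter for deciding when such a technique is appropriate.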
Social scientists have become increasingly concerned with their possible responsibility to question and to change the status quo (Dahrendorf, 1958; Deutsch and Hornstein, 1975; Habermas, 1972; Lazarsfeld and Reitz, 1975; Mitroff and Kilmann, 1978; Moscovici, 1972). A need for research on “liberating alternatives” is being expressed. Examining this literature, one finds a dearth of research on how to implement the “liberating alternatives” suggested by the social scientists.
Implementation has often been considered “applied” or “practical,” thereby delegating it to the domain of vocational activities, a domain that scientists rarely have supported. Recently, however, a recognition has developed that there are very powerful intellectual issues in moving from ideas to action (Lindblom and Cohen, 1979). It is the purpose of this paper to explore some of the “individual” factors that will make implementing liberating alternatives difficult.
The paper contains two interrelated arguments. Individuals, acting as agents for various kinds of organizations, must do the actual implementing. They bring to this task theories of action (probably learned early in their lives) which, when used correctly, will be counterproductive to implementing liberating alternatives. While acting, individuals are unaware of the counterproductivity of their actions. The unawareness is due to their culturally learned theories of action. The word “individual” above was placed in quotes because, although individuals may do the implementing, the theories of action in their heads – the theories that they will use – are, I suggest, examples of massive socialization processes.
Economists generally attribute considerable rationality to the agents in their models. The recent popularity of rational expectations models is more an example of a general tendency than a radical departure. Since rationality is simply assumed, there is little in the literature to suggest what would happen if some agents were not rational.
Do people solve inferential problems in everyday life by using abstract inferential rules or do they use only rules specific to the problem domain? The view that people possess abstract inferential rules and use them to solve even the most mundane problems can be traced back to Aristotle. In modern psychology, this view is associated with the theories of Piaget and Simon. They hold that, over the course of cognitive development, people acquire general and abstract rules and schemas for solving problems. For example, people acquire rules that correspond to the laws of formal logic and the formal rules of probability theory. Problems are solved by decomposing their features and relations into elements that are coded in such a way that they can make contact with these abstract rules.
This formalist view has been buffeted by findings showing that people violate the laws of formal logic and the rules of statistics. People make serious logical errors when reasoning about arbitrary symbols and relations (for a review, see Evans, 1982). The best known line of research is that initiated by Wason (1966) on his selection task. In that task, subjects are told that they will be shown cards having a letter on the front and a number on the back. They are then presented with cards having an A, a B, a 4, and a 7 and asked which they would have to turn over in order to verify the rule, “If a card has an A on one side, then it has a 4 on the other.”
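The normative answer to the selection task can be worked out mechanically. The encoding below is my own illustration, not from the text: the rule is falsified only by a card pairing an A with a number other than 4, so a card must be turned over exactly when some possible hidden side could falsify the rule:

```python
# Sketch of the logic behind Wason's selection task.
LETTERS = {"A", "B"}
NUMBERS = {"4", "7"}

def falsifies(letter, number):
    # The rule "if A on one side, then 4 on the other" fails only here.
    return letter == "A" and number != "4"

def must_turn(visible):
    # Turn a card over iff some hidden side would falsify the rule.
    if visible in LETTERS:
        return any(falsifies(visible, n) for n in NUMBERS)
    return any(falsifies(l, visible) for l in LETTERS)

cards = ["A", "B", "4", "7"]
to_turn = [c for c in cards if must_turn(c)]
```

The computation confirms that only the A and the 7 need checking; subjects, by contrast, typically select the A and the 4, which is the violation of formal logic at issue in the passage.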
Your boss tells you that he is delighted with your performance over the past year and is giving you a $5,000 bonus. Are you pleased? If you were not expecting a bonus, you will be delighted. If you were expecting a $10,000 bonus, you will be disappointed. The satisfaction you feel with the bonus you are given will depend on your prior expectations. The higher your expectations, the greater will be your disappointment. People who are particularly averse to disappointment may learn to adopt a pessimistic view about the future.
If you accept a 50–50 gamble between $0 and $2,000, there is a 50% chance that you will be disappointed when the lottery is resolved. You may prefer to swap the lottery ticket for a sure $950 not so much because of arguments about decreasing marginal value, but because doing so removes the possibility of disappointment. Of course, someone who feels that the “thrill of victory” is worth the possible “agony of defeat” may take the opposite choice.
Disappointment, then, is a psychological reaction to an outcome that does not match up to expectations. The greater the disparity, the greater the disappointment. We will use the word elation to describe the euphoria associated with an outcome that exceeds expectations. Decision makers who anticipate these feelings may take them into account when comparing uncertain alternatives.
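One way to see how anticipated disappointment and elation can reverse a choice is a toy valuation of the kind sketched below. The functional form and coefficients are illustrative assumptions of mine, not the author's published model:

```python
# Hedged sketch: the felt value of outcome x against prior expectation e
# is x plus a small elation bonus or a larger disappointment penalty,
# weighted by the gap between outcome and expectation.
def felt_value(x, expectation, elation=0.2, disappointment=0.5):
    gap = x - expectation
    return x + (elation if gap >= 0 else disappointment) * gap

# 50-50 gamble between $0 and $2,000: prior expectation $1,000.
lottery = 0.5 * felt_value(0, 1000) + 0.5 * felt_value(2000, 1000)
sure = felt_value(950, 950)   # a sure amount carries no surprise
```

Under these assumed weights the sure $950 is worth 950 while the gamble is worth only 850, matching the passage's intuition; raising the elation weight above the disappointment weight flips the preference, capturing the thrill-of-victory decision maker.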
This chapter is concerned with how the Shapley value can be interpreted as an expected utility function, with the consequences of interpreting it in this way, and with what other value functions arise as utility functions representing different preferences.
These questions brought themselves rather forcefully to my attention when I first taught a graduate course in game theory. After introducing utility theory as a way of numerically representing sufficiently regular individual preferences, and explaining which comparisons involving utility functions are meaningful and which are not, I found myself at a loss to explain precisely what comparisons could meaningfully be made using the Shapley value, if it was to be interpreted as a utility as suggested in the first paragraph of Shapley's 1953 paper. In order to state the problem clearly, it will be useful to remark briefly on some of the familiar properties of utility functions.
First, utility functions represent preferences, so individuals with different preferences will have different utility functions. When preferences are measured over risky as well as riskless prospects, individuals who have the same preferences over riskless prospects may nevertheless have different preferences over lotteries, and so may have different expected utility functions.
Second, there are some arbitrary choices involved in specifying a utility function, so the information contained in an individual's utility function is really represented by an equivalence class of functions.
The standard model of choice utilized by decision scientists in analyzing problems is expected utility (EU) theory. This model is presumed to be descriptive of people's basic preferences, while having normative implications for more complex problems. Recently, however, an extensive literature has suggested that even basic choice is more complicated than utility theory suggests (see for a review). In view of this, the present chapter presents a framework for systematically investigating biases stemming from various information processing limitations. For this purpose, we define bias as a violation of the EU axioms. The experimental data presented in this study, together with a large body of existing evidence, lead us to the conclusion that traditional EU theory may have to be modified if it is to serve as a descriptive and normative model of choice under uncertainty.
Our analysis was, in part, motivated by a recent article of Fishburn and Kochenberger who analyzed 30 empirical utility functions published in earlier literature. These plotted utility functions were defined on net present values, returns on investments, or simply net monetary gain or loss. Some studies used business contexts, some personal and others both. Fishburn and Kochenberger (F–K) divided each graph into a below- and above-target segment, and fitted linear, power, and exponential functions separately to each subset of data.
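The F–K fitting procedure for a single segment can be sketched as follows. The data and the power functional form below are synthetic assumptions for illustration; the actual 30 utility functions and fits are in Fishburn and Kochenberger's paper:

```python
# Sketch of fitting a power function u(x) = a * x**b to one segment
# (e.g., the above-target data) by log-linear least squares.
import numpy as np

def fit_power(x, u):
    # log u = b * log x + log a, so an ordinary linear fit recovers (a, b).
    b, log_a = np.polyfit(np.log(x), np.log(u), 1)
    return np.exp(log_a), b

# Synthetic above-target utilities generated from an assumed exponent 0.6
# (b < 1 corresponds to risk aversion over gains).
gains = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
u_gains = gains ** 0.6
a, b = fit_power(gains, u_gains)
```

In the F–K scheme the same fit would be run separately on the below-target data, allowing the two segments to have different curvature, which is what makes the below/above-target split informative.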
The study of coalition structures has been seriously explored only recently. Coalition structures are already implicit in the von Neumann–Morgenstern (1944) solutions; because of their internal and external stability, the solutions isolate those stable coalition structures that generate the final payoffs. This is to be contrasted with the extensive subsequent literature on the core, in which “blocking” is merely a criterion for accepting or rejecting a proposed allocation; no specific coalition structure is implicit in any core allocation. Analysis of games in partition function form (see, for example, Thrall and Lucas 1963) is a more explicit way of studying restrictions on coalition structures. Perhaps the best-studied class of games for which the core of a coalition structure has been investigated is the “central assignment games” (see, for example, Shapley and Shubik 1972; Kaneko 1982; and Quinzii 1984). This class includes the particular case of the “marriage games” (Gale and Shapley 1962) and is closely related to the various variants of the “job matching games” (see, for example, Crawford and Knoer 1981; Kelso and Crawford 1982; Roth 1984a,b). The nonemptiness of the core of the coalition structure of these games is an important result, to which we return in Section 3.
Jim LeBlanc phoned Steve Baum, who formerly worked in his division, to ask about the CEO's new corporate task force on quality control that wanted to meet with Jim. Jim, the head of the industrial equipment division of Tanner Corporation, thought that Steve, now director of technology, could help him figure out why the task force wanted to meet with him in two weeks.
“It's because you're doing so damn well down there, boss!” Steve replied.
“Gee, thanks. By the way, Steve, what's the agenda for Singer's staff meeting for next week?” (Singer was the president and Jim's boss.)
“Well, we're going to talk about the reorganization and look at the overhead reduction figures for each division. Then Singer's going to report on last week's executive committee meeting and his trip to Japan.”
“How did it go?”
“His telex from Osaka sounded enthusiastic, but he just got in last night and I haven't seen him yet.”
“Well,” said Jim, “I guess we'll just have to see, but, if you hear something, call me right away because if Osaka comes through I'm going to have to hustle to get ready, and you know how Bernie hates to shake it. Now, about the task force…”
In the space of three minutes, Jim LeBlanc got a lot done.
An article of faith among students of value, choice, and attitude judgments is that people have reasonably well-defined opinions regarding the desirability of various events. Although these opinions may not be intuitively formulated in numerical (or even verbal) form, careful questioning can elicit judgments representing people's underlying values. From this stance, elicitation procedures are neutral tools, bias-free channels that translate subjective feelings into scientifically usable expressions. They impose no views on respondents beyond focusing attention on those value issues of interest to the investigator.
What happens, however, in cases where people do not know, or have difficulty appraising, what they want? Under such circumstances, elicitation procedures may become major forces in shaping the values expressed, or apparently expressed, in the judgments they require. They can induce random error (by confusing the respondent), systematic error (by hinting at what the “correct” response is), or unduly extreme judgments (by suggesting clarity and coherence of opinion that are not warranted). In such cases, the method becomes the message. If elicited values are used as guides for future behavior, they may lead to decisions not in the decision maker's best interest, to action when caution is desirable (or the opposite), or to the obfuscation of poorly formulated views needing careful development and clarification.
The topic of this chapter is the confrontation between those who hold (possibly inchoate) values and those who elicit values.
The papers in this volume are organized according to a few guiding principles.
The first dichotomy is between theory (20 papers) and application (8 papers).
Within the domain of theory we organized the papers into the following trichotomy:
(a) conceptions of choice (9 papers)
(b) beliefs and judgments about uncertainties (4 papers)
(c) values and utilities (7 papers)
Within each of these categories we arranged the papers according to the following sequence:
(a) decisions people make and how they decide
(b) logically consistent decision procedures and proposals of how people should decide
(c) behavioral objections to normative proposals
(d) how to help people to make better decisions in the light of behavioral realities and normative ideals
(e) how to train people to make better decisions, for example, by providing heuristics and, possibly, therapy
This sequence is motivated and elaborated in the overview paper by the editors (chapter 1).
In the application section, the papers are arranged by fields: economics, management, education, and medicine.
We, the organizers of this conference and its proceedings, in discussions among ourselves about our domain of concern – individual decision making under uncertainty – have found the following taxonomy helpful:
Descriptive
Decisions people make
How people decide
Normative
Logically consistent decision procedures
How people should decide
Prescriptive
How to help people to make good decisions
How to train people to make better decisions
Observe that we have moved from the usual dichotomy (descriptive and normative) to a trichotomy by adding a “prescriptive” category.
The ideas in this chapter grow out of a simple view of what the function of decision-analytic tools is: to help a decision maker with a set of problems. (This chapter uses the label “decision maker” to refer either to an individual or to a coherent group.)
The thrust of this view is easier to understand if we add a list of other things that could have been said. Decision-analytic tools are not intended primarily for any of the following purposes:
Capturing intuitive preferences.
Modeling future preferences.
Modeling environmental processes.
Embodying axiomatic or methodological rigor, or conforming to axiomatic structures.
Decision analysts often defend and try to implement these four goals, for the excellent reason that attainment of each of them often helps. But they derive their merit from the primary goal of helping decision makers with problems, not the other way around.
Unfortunately, each of these subgoals can lead to results that hinder, rather than facilitate, attainment of the main goal. Capturing intuitive preferences in detail can lead to complicated, hard-to-understand, hard-to-communicate elicitation methods and models. Often the decision-analytic procedures form future preferences, or even help invent options about which to have preferences, rather than modeling preferences. Modeling of environmental processes is very useful for many technological problems, but its appearance of objectivity can create or nurture myths.