The last chapter showed how decision problems with many different simultaneous objectives can be addressed using the formal techniques developed earlier in this book. We now turn to a related problem where – as in the last example of that chapter – the processes describing the DM's beliefs are high dimensional. Formally, of course, this presents no great extension from those described in the early part of this book. The theory leading to expected utility maximising strategies applies just as much to problems where uncertainty is captured through distributions of high-dimensional vectors of random variables as to much simpler ones.
However, from a practical point of view, a Bayesian decision analysis in this more complicated setting is by no means so straightforward to enact. A joint probability space requires an enormous number of joint prior probabilities to be elicited, often from different domain experts. The analyst therefore faces a significant challenge in resourcing the DM to build a framework that, on the one hand, faithfully and logically combines the informed descriptions of diverse features of the problem and, on the other, supports both the calculation of optimal policies and diagnostics to check the continuing veracity of the system.
With the increase in electronic data collection and storage, many authors have recognised this challenge and developed ways of securely building faithful Bayesian models even when the processes are extremely large.
This book introduces the principles of Bayesian Decision Analysis and describes how this theory can be applied to a wide range of decision problems. It is written in two parts. The first presents what I consider to be the most important principles and good practice in mostly simple settings. The second part shows how the established methodology can be extended to address the sometimes very complex and data-rich structures a decision maker might face. It will serve as a course book for a 30-lecture course on Bayesian decision modelling given to final-year undergraduates with a mathematical core to their degree programme and to statistics Master's students at Warwick University. Complementary material given in two parallel courses at Warwick, one on Bayesian numerical methods and the other on Bayesian time series, is largely omitted, although these subjects are motivated within the text. The book contains foundational material on subjective probability theory and multiattribute utility theory – with a detailed discussion of the efficacy of various assumptions underlying these constructs – and quite an extensive treatment of frameworks such as event and decision trees, Bayesian Networks, Influence Diagrams, and Causal Bayesian Networks. These graphical methods help draw different aspects of a decision problem together into a coherent whole and provide frameworks within which data can be used to support a Bayesian decision analysis.
This is not just a textbook; it also provides additional material to help the reader develop a more profound understanding of this fascinating and highly cross-disciplinary subject.
This chapter deals with prospect theory, which generalizes RDU by incorporating loss aversion. It thus integrates utility curvature, probabilistic sensitivity, and loss aversion, the three components of risk attitude.
A symmetry about 0 underlying prospect theory
It is plausible that utility has a kink at zero, and exhibits different properties for gains than for losses. Formally, for a fixed reference point these properties could also be modeled by rank-dependent utility, in the same way as §8.3 does not entail a real departure from final wealth models and expected utility. Prospect theory does generalize rank-dependent utility in one formal respect also for the case of one fixed reference point: It allows for different probability weighting for gains than for losses. Thus, risk attitudes can be different for losses than for gains in every respect.
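To fix ideas, here is a sketch of the evaluation in standard cumulative notation; the formal definition comes later in the book and its notation may differ. For a prospect with outcomes ranked x_1 ≥ … ≥ x_k ≥ 0 > x_{k+1} ≥ … ≥ x_n and probabilities p_1, …, p_n, prospect theory assigns the value
\[
\sum_{i=1}^{k} \bigl( w^{+}(p_1 + \cdots + p_i) - w^{+}(p_1 + \cdots + p_{i-1}) \bigr) U(x_i)
\;+\;
\sum_{i=k+1}^{n} \bigl( w^{-}(p_i + \cdots + p_n) - w^{-}(p_{i+1} + \cdots + p_n) \bigr) U(x_i),
\]
with U(0) = 0 and with w⁺ and w⁻ the weighting functions for gains and for losses. When w⁻ is the dual of w⁺, i.e. w⁻(p) = 1 − w⁺(1 − p), the formula reduces to rank-dependent utility; allowing w⁻ to differ from this dual is the extra freedom referred to above.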
It is plausible that sensitivity to outcomes and probabilities exhibits symmetries about the reference point. To illustrate this point, we first note that the utility difference U(1020) − U(1010) is usually smaller than the utility difference U(20) − U(10) because the former concerns outcomes farther removed from 0, leading to concave utility for gains. A symmetric reasoning for losses suggests that the utility difference U(−1010) − U(−1020) will be perceived as smaller than the utility difference U(−10) − U(−20). For the former difference, the losses are so big that 10 more does not matter much. This argument suggests convex rather than concave utility for losses, in agreement with many empirical findings (§9.5).
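A commonly used parametric family illustrating this pattern – offered only as an illustration, not as a form assumed in the text – is the power family with a loss-aversion parameter:
\[
U(x) =
\begin{cases}
x^{\alpha} & \text{if } x \geq 0,\\
-\lambda (-x)^{\beta} & \text{if } x < 0,
\end{cases}
\]
where 0 < α, β < 1 give concave utility for gains and convex utility for losses, and λ > 1 produces the kink at 0 that models loss aversion.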
In the preceding chapter we saw how rank dependence can be used to model pessimism and optimism. Another important component of probability weighting, orthogonal to the optimism–pessimism component, and cognitive rather than motivational, concerns likelihood sensitivity. This component is introduced informally in the following section, and is presented formally in §7.7. Several other extensions and applications of rank dependence are given.
Likelihood insensitivity and pessimism as two components of probabilistic risk attitudes
Figures 7.1.1–3 illustrate how two kinds of deviations from additive probabilities combine to create the probability weighting functions commonly found. Fig. 7.1.1a depicts traditional EU with probabilities weighted linearly; i.e., w(p) = p. Fig. 7.1.1b depicts pessimism as discussed in the preceding chapter.
Fig. 7.1.2a shows another psychological phenomenon. It reflects “diminishing sensitivity” for probabilities, which we will call likelihood insensitivity. Relative to EU, the weighting function is too shallow in the middle region, and too steep near both endpoints. An extreme case is shown in Fig. 7.1.3a. Here w is extremely steep at 0 and 1, and completely shallow in the middle. Such behavior is typically found if people distinguish only between “sure to happen,” “sure not to happen,” and “don't know.” An example of such a crude distinction is in Shackle (1949b p. 8). The expression 50–50 is commonly used to express such crude perceptions of uncertainty. “Either it happens or it won't; you can't say more about it.” is another way of expressing such beliefs. No distinction is made between different levels of likelihood.
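A parametric illustration of such an inverse-S shaped weighting function – given here only as a sketch, since this chapter stays informal – is the family of Tversky & Kahneman (1992),
\[
w(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}, \qquad 0 < \gamma < 1,
\]
which is relatively steep near p = 0 and p = 1 and relatively shallow in between; smaller values of γ correspond to stronger likelihood insensitivity.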
The main text has considered decision under uncertainty and decision under risk. The results in this book can be reinterpreted for other contexts such as welfare evaluations, intertemporal choice, multiattribute utility, and numerous other applications. In some contexts a quantitative function V on the prospects, rather than a preference relation ≽, is taken as a primitive. These can often be related to each other, where ≽ is the preference relation represented by V and V is for instance the certainty equivalent function generated by the preference relation. Many conditions for preferences can readily be restated in terms of the certainty equivalent function. For simplicity, we assume that S is finite in this appendix. Extensions to general sets S are straightforward.
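As a minimal sketch of that correspondence, in notation that need not match the appendix's: given a representing functional such as EU with a continuous, strictly increasing utility U, the certainty equivalent function
\[
CE(x) = U^{-1}\bigl(EU(x)\bigr)
\]
assigns to each prospect the sure outcome indifferent to it, and it represents ≽ in the sense that x ≽ y if and only if CE(x) ≥ CE(y). Conversely, starting from such a function V one can define ≽ by x ≽ y ⟺ V(x) ≥ V(y) and restate preference conditions as conditions on V.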
Prospect theory, defined in the next chapter, adds a new component to classical theories, namely reference dependence, which is the topic of this chapter. This component is of a different nature than concepts we have defined so far. It depends on aspects of framing and entails, I think, a bigger deviation from rationality than probability weighting. It is so volatile that it is hard to model theoretically (Fatas, Neugebauer, & Tamborero 2007; Kühberger, Schulte-Mecklenbeck, & Perner 1999), and much of the handling of reference dependence takes place in the modeling stage preceding the quantitative analyses presented in this book. Hence, up till now hardly any theory has been developed for reference dependence. Nevertheless, this deviation is of major empirical importance. I think that more than half of the risk aversion empirically observed has nothing to do with utility curvature or with probability weighting. Instead, it is generated by loss aversion, the main empirical phenomenon regarding reference dependence. Hence, this chapter will discuss reference dependence, even though, unlike the remainder of this book, it will have little theory and few quantitative assessments, and there will be almost no exercises either.
Before we can discuss reference dependence, two subtle points have to be clarified that have raised much confusion in the literature. First, inconsistencies that can arise between asset integration and isolation for moderate stakes (§8.1) will not be due to inappropriateness of either principle. Rather, they result from another cause: overly strong deviations from risk neutrality for moderate stakes (§8.2).
The general extension of finite-dimensional behavioral foundations to infinite-dimensional ones is discussed in detail by Wakker (1993a). This book does consider infinite state spaces, but confines its attention to prospects with finitely many outcomes. Then the extension of results from finite state spaces to infinite state spaces is not difficult. The procedure will now be explained, both for the special case of Theorem 4.6.4 (uncertainty-EU) and in general for all theorems of this book. For Theorem 4.6.4, the explanations between square brackets should be read. For the other results, these explanations should be skipped, or adapted to more general models.
Assume that we have established preference conditions that are necessary and sufficient for the existence of a representation [EU] for finite state spaces S, with a utility function U that is unique up to unit and level. Assume that the behavioral foundation also involves one or more set functions [only one, namely P] defined on the subsets of S – or, for mathematicians, on an algebra of events – and that these set functions are uniquely determined. Besides the probability P in EU such as in Theorem 4.6.4, we deal with a nonadditive measure in rank-dependent utility and with two nonadditive measures in prospect theory. Then this behavioral foundation [of EU] also holds under Structural Assumption 1.2.1 (decision under uncertainty) if S is infinite. The proof is as follows.
This chapter presents the intuition and psychological background of rank-dependent utility. There will be no formal definitions, and the chapter can be skipped if you are only interested in formal theory. After Preston & Baratta (1948) it took 30 years before Quiggin discovered a proper way to transform probabilities, namely through rank dependence. After Keynes (1921) and Knight (1921), it took even more than 60 years before David Schmeidler discovered a proper way to model uncertainty (the topic of later chapters), again through rank dependence. This history shows the depth of the rank-dependent idea, which is why we dedicate this chapter to developing the underlying intuition.
§5.1 presents the important intuition of probabilistic sensitivity, which is an essential component, in addition to utility curvature, to obtain empirically realistic models of risk attitudes. Probabilistic sensitivity underlies all nonexpected utility (nonEU) theories. The question is how to develop a sound decision model that incorporates this component. The rest of the chapter argues that rank-dependent utility can serve as a natural model to obtain such a sound theory. The arguments are based on psychological interpretations and heuristic graphs. These suggest, first, that the rank-dependent formula is natural from a mathematical perspective. They also suggest that the rank-dependent formula matches psychological processes of decision making, in agreement with the homeomorphic approach taken in this book. Heuristic ideas as presented in this chapter may have led John Quiggin and David Schmeidler to invent the rank-dependent model.
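For reference – the formal definition appears later in the book, so this is only a preview in standard notation – the rank-dependent formula for a risky prospect with outcomes ranked x_1 ≥ … ≥ x_n and probabilities p_1, …, p_n is
\[
RDU(p_1x_1 \cdots p_nx_n) = \sum_{i=1}^{n} \bigl( w(p_1 + \cdots + p_i) - w(p_1 + \cdots + p_{i-1}) \bigr) U(x_i),
\]
so that the decision weight of an outcome depends not only on its own probability but also on its rank, i.e. on the probability of receiving something better.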
This appendix presents figures depicting the relations between the sections in this book. An arrow from one section to another section indicates that, for reading the latter section, the former section should be read first. For example, the figure for Chapter 12 shows that, to read §12.1 with the definition of PT (prospect theory) for unknown probabilities, first §10.2 with the definition of RDU (rank-dependent utility) for unknown probabilities has to be read, along with §9.2 with the definition of PT for risk. The figure for Chapter 10 then shows that, before reading §10.2, first §10.1 has to be read, and the figure for Chapter 9 shows that §8.4 and §7.6 have to be read before §9.2 can be read; and so on. Starting with the section of interest, one first encircles this section, and then all preceding sections needed to read it. One then moves back section by section, for each encircled section along the way encircling the required preceding ones. All encircled sections then have to be read.
Figure K.1 depicts all sections that have to be read this way before §12.1 can be read. In the figures, a dashed “double” arrow ⇓ from a first section – always printed in bold and underlined – to a set of sections contained within a dashed square or polytope indicates that the first section should be read before any of the other sections can be read.
In this chapter we return to general decision under uncertainty, with event-contingent prospects. As in Chapter 1, we assume that prospects map events to outcomes, as in E_1x_1 … E_nx_n. Superscripts should again be distinguished from powers. EU is linear in probability and utility and, hence, if one of these is known – as was the case in the preceding chapters – then EU analyses are relatively easy. They can then exploit the linearity with respect to the addition of outcome utilities or with respect to the mixing of probabilities. Then the modeling of preferences amounts to solving linear (in)equalities. It was, accordingly, not very difficult to measure probabilities in Chapter 1 and to derive the behavioral foundation there, or to measure utility in Chapter 2 and to derive the behavioral foundation there.
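As a reminder, in standard notation, the EU of such a prospect is
\[
EU(E_1x_1 \cdots E_nx_n) = \sum_{i=1}^{n} P(E_i) U(x_i),
\]
which is linear in the probabilities P(E_i) when U is known and linear in the utilities U(x_i) when P is known; this is the linearity exploited in the preceding chapters.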
In this chapter, we assume that neither probabilities nor utilities are known. In such cases, results are more difficult to obtain because we can no longer use linear analysis, and we have to solve nonlinear (in)equalities. We now face different parameters and these parameters can interact. This will be the case in all models studied in the rest of this book. This chapter introduces a tradeoff tool for analyzing such cases. This tool was recommended by Pfanzagl (1968 Remark 9.4.5). Roughly speaking, it represents the influence that you have in a decision situation given a move of nature.
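To give a flavour of the tool – a sketch under EU; the precise definitions follow in this chapter – suppose we observe the two indifferences
\[
\alpha_{E}x \sim \beta_{E}y \quad\text{and}\quad \gamma_{E}x \sim \delta_{E}y,
\]
where α_E x denotes the prospect x with its outcome on event E replaced by α. Writing out the two EU equalities and subtracting them cancels both the unknown probability of E and the common terms outside E, leaving
\[
U(\alpha) - U(\beta) = U(\gamma) - U(\delta).
\]
Equalities of utility differences can thus be observed even though neither probabilities nor utilities are known beforehand.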
Preference foundations of expected utility, supporting the rationality of this theory, became widely known in the 1960s. They gave a big boost to the popularity of expected utility in many fields. Clarifying illustrations of early applications include Keeney & Raiffa (1976 Chs. 7 and 8), McNeil et al. (1978, 1981), Weinstein et al. (1980 Ch. 9), and Winkler (1972 §5.10). After a first, optimistic, period it gradually became understood that there are systematic empirical deviations, and that applications will have to be more complex than first meets the eye. Kahneman & Tversky's (1979) prospect theory was the major paper to disseminate this insight, and to initiate new and more refined nonexpected utility models. In the same way as Bernoulli's (1738) expected utility entailed a departure from objectivity, prospect theory entailed a departure from rationality. Another influential paper to initiate new models was Machina (1982) who, however, argued for a rational status of those new models. Parts II and III of this book are dedicated to descriptive nonexpected utility theories that may depart from rationality.
This appendix uses parts of Wakker (1989a, Ch. 1). Throughout this book we adopt the revealed-preference paradigm: A decision maker chooses the most preferred prospect from a set of available prospects. Then, besides the descriptions of the prospects (that can be general choice options here), the only observable primitives are assumed to be those choices. Utilities, subjective probabilities, and other concepts exclusively obtain their empirical meaning from their implications for the choices made.
In most practical situations there are more than two prospects to choose from. Revealed-preference theory examines when such general choice situations can be represented by preferences – binary choices – after all (Chipman et al. 1971; Houthakker 1950; Mas-Colell, Whinston, & Green 1995 §1.D). Because all of the main text has assumed that optimal choices are fully specified by preferences, this appendix shows how the preference theories in this book are relevant for general decisions.
This appendix assumes a general choice function, defined formally later, as the empirical primitive, rather than binary preference. Binary preference now is the theoretical construct used to model the empirical primitive. Thus, this appendix gives a behavioral foundation for binary preference, justifying its use throughout the book. The elicitation method now does not derive other parameters from binary preference, as was done in the main text, but instead derives binary preference from the choice function. Hence, the term revealed preference has sometimes been used to designate this particular version of the elicitation method, initiated by Samuelson (1938).
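As a minimal sketch of the standard construction – the appendix's own definitions may differ in detail – a choice function C assigns to every available choice set A a nonempty subset C(A) ⊆ A of options that may be chosen, and binary preference is defined from it by
\[
x \succcurlyeq y \iff x \in C(\{x, y\}).
\]
The behavioral foundation then characterizes when C agrees with this ≽ on all choice sets, i.e. when C(A) consists exactly of the ≽-maximal elements of A.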
The important financial decisions in our life concern large stakes, and then the maximization of expected value may not be reasonable. Most of our decisions concern nonquantitative outcomes such as health states. Then expected value cannot be used because it cannot even be defined. For these reasons, a more general theory is warranted. We now turn to such a theory – expected utility. For simplicity, we consider only the case where probabilities are known in this chapter. This case is called decision under risk. The general case of both unknown probabilities and unknown utility is more complex, and will be dealt with in later chapters. Whereas Chapter 1 showed how to read the minds (beliefs, i.e. subjective probabilities) of people, this chapter will show how to read their hearts (happiness, i.e. utility).
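In symbols – stated here only as a preview, in standard notation – a prospect yielding outcome x_i with known probability p_i is evaluated by
\[
EU(p_1x_1 \cdots p_nx_n) = \sum_{i=1}^{n} p_i U(x_i),
\]
which reduces to expected value when U is the identity; a nonlinear utility function U is exactly what the more general theory adds.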
Decision under risk as a special case of decision under uncertainty
Probabilities can be (un)known to many degrees, all covered by the general term uncertainty. Decision under risk is the special, limiting case where probabilities are objectively given, known, and commonly agreed upon. Risk is often treated separately from uncertainty in the literature. It is more efficient, and conceptually more appropriate, to treat risk as a special case of uncertainty. I will discuss this point in some detail in this and the following sections. This point will be especially important in the study of ambiguity in Part III. Machina (2004) provided a formal model supporting this point.
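Concretely, if each event E_i carries a known, agreed probability P(E_i) = p_i, then the event-contingent prospect E_1x_1 … E_nx_n can be identified with the risky prospect p_1x_1 … p_nx_n, so that every concept and result stated for uncertainty specializes immediately to risk.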
This book has been written and organized especially for readers who do not want to read all of its contents, but want to skip parts and select the material of their own interest. This has been achieved by an organization of exercises explained later, and by an Appendix K that describes the interdependencies between sections. Because of this organization, this book can be used by readers with different backgrounds.
We will examine theories of individual decision making under uncertainty. Many of our decisions are made without complete information about all relevant aspects. This happens for instance if we want to gamble on a horse race and have to decide which horse to bet on, or if we are in a casino and have to decide how to play roulette, if at all. Then we are uncertain about which horse will win or how the roulette wheel will be spun. More serious examples include investments, insurance, the uncertain results of medical treatments, and the next move of your opponent in a conflict. In financial crises, catastrophes can result from the irrational attitudes of individuals and institutions towards risks and uncertainties.
Two central theories in this book are expected utility theory and prospect theory. For all theories considered, we will present ways to empirically test their validity and their properties. In many applications we require more than just qualitative information.
The topic of the first part of this book, expected utility, has been covered by many textbooks and surveys. The focus on measurement and behavioral foundations, as in this book, is primarily found in books from decision analysis (risk theory applied to management science). These include Bunn (1984), Clemen (1991; the first part has much material on modeling actual decision situations and there are many case studies), von Winterfeldt & Edwards (1986), Keeney & Raiffa (1976; Chs. 1–4, with Chs. 5 and 6 focusing on questions of aggregation as in our §3.7; read Raiffa 1968 first), and Raiffa (1968 Chs. 1, 2, 4, 5, 6). Although Luce & Raiffa (1957) is mostly on game theory, its presentation of uncertainty and risk in Chs. 2 and 13 is outstanding. Economic works include the well-written Drèze (1987 first two chapters), Gollier (2001a), Kreps (1988), Machina (1987), and Mas-Colell, Whinston, & Green (1995 Chs. 1, 6). Mathematical works include the valuable and efficient collection of material in Fishburn (1970, 1981), the deep and rich Krantz et al. (1971), and the impressive and mature Pfanzagl (1968). Fishburn (1972) gives a convenient introduction to the mathematics of decision theory. Medical works close to the topic of this book include the accessible Sox et al. (1986) and the deeper and more technical Weinstein et al. (1980), with a broader exposition on cost-effectiveness analyses in Drummond et al. (1987) and Gold et al. (1996). Philosophical works include the impressive Broome (1991).