This book has presented a number of theories for decision under uncertainty and risk, primarily classical expected utility, rank-dependent utility, and prospect theory. Throughout we have treated risk with given probabilities as a special limiting case of uncertainty. We have strived for homeomorphic models, where the algebraic operations in models resemble psychological processes. To achieve this goal, we developed nonparametric deterministic measurement methods, where model parameters can directly be linked to observable choice, the empirical primitive. Behavioral foundations were derived from consistency requirements for the measurements of the model parameters.
In Part I, on expected utility, there were relatively many exercises and applications because this model is classic and has existed for a long time. The first new concept beyond the classical model was rank dependence, introduced informally in Chapter 5 and developed in the following chapters. Using rank dependence we could define pessimism and optimism, which, while formally new, were closely related to the classical concepts of risk aversion and risk seeking. We also found a new phenomenon, likelihood insensitivity, the second new concept beyond the classical model. It reflects a lack of understanding of risk and uncertainty rather than an aversion or a preference; no similar phenomenon can be modeled with expected utility. The third new concept was reference dependence, leading, in prospect theory, to a different treatment of gains than of losses. The fourth new concept was source dependence, with ambiguity referring to the difference between sources with unknown probabilities and chance (known probabilities).
This book is the culmination of 14 years of teaching. In the 15th year, when for the first time I did not feel like rereading or rewriting, the time had come to publish it. The book received helpful comments from Han Bleichrodt, Arie de Wild, Itzhak Gilboa, Glenn Harrison, Amit Kothiyal, Gijs van de Kuilen, Georg Weizsäcker, and many students during the past 14 years. Thorough comments from Rich Gonzalez and Vitalie Spinu are especially acknowledged. I am most indebted to Stefan Trautmann for the numerous improvements he suggested. This book has also benefited from many inspiring discussions with Craig Fox, with whom I share the privilege of having collaborated with Amos Tversky on uncertainty during the last years of his life.
This chapter presents rank-dependent utility for uncertainty as a natural generalization of the same theory for risk. Much of the material in this chapter can be obtained from Chapters 7 and 8 (on rank dependence under risk) by using a word processor to search for “probability p” and by then replacing it with “event E” everywhere. Similarly, much of this chapter can be obtained from Chapter 4 (on EU under uncertainty) by searching for “outcome event E” and replacing it with “ranked event E^R,” and “subjective probabilities of events” with “decision weights of ranked events.” Most of this chapter should, accordingly, not be surprising. I hope that the readers will take this absence of a surprise as a surprise in a didactic sense. A good understanding of the material of §2.1–§2.3, relating risk to uncertainty, will facilitate the study of this chapter.
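In the notation just indicated, the evaluation takes roughly the following form (a schematic sketch using standard rank-dependent notation; the formal definitions appear later in the chapter). For a prospect (E_1: x_1, …, E_n: x_n) with outcomes ranked x_1 ≥ ⋯ ≥ x_n,

\[ \mathrm{RDU}(E_1{:}x_1,\ldots,E_n{:}x_n) \;=\; \sum_{i=1}^{n} \pi\!\left(E_i^{R_i}\right) U(x_i), \qquad \pi\!\left(E^{R}\right) = W(E \cup R) - W(R), \]

where R_i = E_1 ∪ ⋯ ∪ E_{i−1} is the rank of E_i and W is the weighting function over events.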
The literature on rank dependence for uncertainty has commonly used a “comonotonicity” concept that, however, is not very tractable for applications. Hence, our analysis will use ranks instead, in the same way as we did for risk. Comonotonicity is analyzed in Appendix 10.12.
Probabilistic sophistication
The assumption of expected utility maximization has traditionally been divided into two assumptions. The first one entails that all uncertainties can be quantified in terms of probabilities, so that event-contingent prospects can be replaced by probability-contingent prospects (Cohen, Jaffray, & Said 1987, Introduction; Savage 1954 Theorem 5.2.2).
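Loosely (a sketch of the idea, not the formal definition given later), this first assumption requires a probability measure P on events such that an event-contingent prospect is treated exactly as the probability-contingent prospect it induces:

\[ (E_1{:}x_1,\ldots,E_n{:}x_n) \;\sim\; (P(E_1){:}x_1,\ldots,P(E_n){:}x_n). \]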
If you do not know how to solve an exercise, then it is not yet time to inspect the elaborations provided here. It is better then to restudy the preceding theory so as to find out what is missing in your knowledge. Such searches for gaps in knowledge are the most fruitful part of learning new ideas.
Exercise 1.1.1.
(a) A: {Bill}; B: {Bill, no-one}; C: {Bill, Jane, Kate}; D: {Jane, Kate}; E: {Jane, Kate}. Note that D = E.
(b) x: (Bill: n, Jane: α, Kate: α, no-one: α); y: (Bill: n, Jane: n, Kate: α, no-one: n); z: (Bill: α, Jane: n, Kate: n, no-one: n).
(c) 2⁴ = 16, being the number of ways to assign either an apple or nothing to each element of S.
(d) Two exist, being α and n. It is allowed to denote constant prospects just by their outcome, as we did. We can also write them as (Bill: α, Jane: α, Kate: α, no-one: α) and (Bill: n, Jane: n, Kate: n, no-one: n). □
Exercise 1.1.2. The answer is no. Because only one state of nature is true, s1 and s2 cannot both happen, and s1∩s2 = Ø. Indeed, it is not possible that both horses win. Hence P(s1∩s2) = 0 ≠ 1/8 = P(s1) × P(s2). Stochastic independence is typically interesting for repeated observations. Decision theory as in this book focuses on single decisions, where a true state of nature obtains only one time. The horse race takes place only once.
Tversky & Kahneman (1992) corrected the theoretical problem of probability weighting in Kahneman & Tversky's (1979) original prospect theory. Thus, the theory could be extended to general probability-contingent prospects, which was the topic of Chapter 9. A more important advance of Tversky & Kahneman (1992) was the extension of prospect theory from risk to uncertainty. A common misunderstanding in decision theory today is that prospect theory concerns only risk, and that other models should be used for ambiguity. This chapter shows that prospect theory is well suited for analyzing uncertainty and ambiguity.
Prospect theory for risk generalizes rank dependence by adding reference dependence, with risk attitudes for losses differing from those for gains. Although there have not been many studies of ambiguity with losses, the existing evidence suggests that ambiguity attitudes for losses deviate considerably from those for gains, with ambiguity seeking rather than ambiguity aversion prevailing for losses (references in §12.7). Hence, for the study of ambiguity, the reference dependence of prospect theory is highly desirable.
All models presented so far were special cases of the model of this chapter, and all preceding chapters have prepared for this chapter. All the ingredients have, accordingly, been developed by now. Hence, all that remains to be done is to put these ingredients together. This chapter on the most important model of this book will accordingly be brief, and will only add some theoretical observations.
Given the many applications of expected utility for decision under risk, we dedicate a separate chapter to this topic. Throughout this chapter we make the following assumption, often without further mention. It implies Structural Assumption 2.5.2 (decision under risk and richness), adding the assumption of EU.
Structural Assumption 3.0.1 [Decision under risk and EU]. ≽ is a preference relation over the set of all (probability-contingent) prospects, which is the set of all finite probability distributions over the outcome set ℝ. Expected utility holds with a utility function U that is continuous and strictly increasing. □
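In symbols (merely restating the assumption, with generic notation), a prospect assigning probability p_j to outcome x_j is evaluated by

\[ \mathrm{EU}(p_1{:}x_1,\ldots,p_n{:}x_n) \;=\; \sum_{j=1}^{n} p_j\, U(x_j), \]

and ≽ maximizes this value.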
The assumption that all finite probability distributions are available in the preference domain entails, in fact, a strong richness restriction, similar to our assumption that all real-valued outcomes are available in the domain. The assumption is, however, commonly made in the literature on decision under risk and it facilitates the analysis, which is why we use it too.
An application from the health domain: decision tree analysis
My experience with applications of decision theory mostly comes from the medical domain. Although discussions of medical examples can at times be depressing, dealing with human suffering, the medical domain is one of the most important fields of application for decision theory. Hence, I present a medical application. We consider a simplified decision analysis for patients with laryngeal cancer in stage T3 (a particular medical state of the cancer with no metastases; McNeil et al. 1981). In this subsection, as an exception, outcomes are nonmonetary.
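The folding-back calculation on which such a decision tree analysis rests can be sketched in a few lines of code. The sketch below is illustrative only: the option names, probabilities, and utilities are hypothetical placeholders, not the values from McNeil et al. (1981) or from the analysis in this subsection.

# A minimal illustrative sketch of folding back a two-option decision tree by
# expected utility. All probabilities and utilities are hypothetical
# placeholders, NOT the values from McNeil et al. (1981).

options = {
    "surgery": [
        (0.10, 0.0),   # hypothetical: operative mortality
        (0.90, 0.8),   # hypothetical: survival with impaired speech
    ],
    "radiation": [
        (0.30, 0.0),   # hypothetical: death after recurrence
        (0.70, 1.0),   # hypothetical: survival with normal speech
    ],
}

def expected_utility(branches):
    # Fold back a chance node: probability-weighted average of the utilities.
    return sum(p * u for p, u in branches)

for name, branches in options.items():
    print(f"{name}: EU = {expected_utility(branches):.2f}")
best = max(options, key=lambda name: expected_utility(options[name]))
print("option recommended by these placeholder inputs:", best)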
Since Keynes (1921), Knight (1921), and Ellsberg (1961) it had been understood that we need a theory for decision under uncertainty when no probabilities are given. A first proposal came from de Finetti (1931a) and Ramsey (1931), and was later perfected by Savage (1954). These authors showed that, if no objective probabilities are given, then we have to provide subjective probabilities as well as we can, assuming some conditions. This led to expected utility for uncertainty, the topic of Chapter 4. Because we still had probabilities available, we could use many techniques from risk, and no essentially new techniques had to be developed.
The results of de Finetti, Ramsey, and Savage were first challenged by Allais (1953a), who showed that people often do not maximize expected utility. Allais did not challenge the role of probabilities (the concern of Keynes and Knight), and assumed those given. A more serious challenge came from Ellsberg (1961). He provided a paradox where, again, the conditions of de Finetti et al. were violated. These violations were, however, more fundamental. They showed that under plausible circumstances no subjective probabilities can be provided in any manner. Thus Ellsberg put the question of Keynes and Knight back on the table: We need a new theory for decision under uncertainty, one that essentially extends beyond probabilistic reasoning. Yet, despite the importance of such a theory, for more than 60 years after Keynes (1921) and Knight (1921) no one had been able to invent it due to the subtle nature of uncertainty in the absence of probabilities.
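For orientation, a standard three-color version of Ellsberg's argument (the book's Example 10.3.1 may differ in its details) runs as follows. An urn contains 30 red balls and 60 balls that are black or yellow in unknown proportion. Most people prefer betting on red to betting on black, yet prefer betting on black-or-yellow to betting on red-or-yellow. Any subjective probability measure P would then have to satisfy

\[ P(R) > P(B) \quad\text{and}\quad P(B) + P(Y) > P(R) + P(Y), \]

which is impossible, because the second inequality reduces to P(B) > P(R). Hence no subjective probabilities can represent these preferences.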
This chapter analyzes phenomena under uncertainty that did not show up under risk. §11.1 discusses how the Ellsberg paradox (Example 10.3.1) reveals the main new phenomenon. There are not many tractable models available in the literature to analyze ambiguity empirically. §11.2 proposes some special cases of RDU that can serve this purpose. Analyses in terms of risk and uncertainty premiums are in §11.3. Before turning to pragmatic ways of measuring ambiguity, §11.4–11.6 present some models alternative to RDU, so that these can also be discussed in the pragmatic analysis (although knowledge of these is not necessary for what follows). Besides source preference, which is related to optimism and elevation of the weighting curve (the motivational component in Figures 7.1.1–7.1.3), likelihood sensitivity is another important component of uncertainty attitudes, also depending on sources (the cognitive component in Figures 7.1.1–7.1.3). Tversky & Fox (1995) and Tversky & Wakker (1995) provided theoretical and empirical analyses of this component for ambiguity. The pragmatic measurements of ambiguity aversion and likelihood sensitivity are presented in §11.7–11.8. Three appendices follow.
The Ellsberg paradox and the home bias as within-subject between-source comparisons
Sources of uncertainty will play a central role in this chapter. These are sets of uncertain events generated by the same mechanism. Further details will be provided later. The main new phenomenon that can be inferred from the Ellsberg paradox and that will be the topic of this chapter concerns different attitudes within the same person between different sources of uncertainty.
This appendix discusses the case where models do not fit data perfectly well and we nevertheless try to get by as well as we can (Gilboa 2009 §7.1). It is the case almost exclusively met in descriptive applications. We will use a simple least-squares criterion to fit data.
Nonparametric measurements and parametric fittings for imperfect models: general discussion
This section presents a general discussion with methodological considerations. It can be skipped by readers who only want to use techniques for fitting data. In most descriptive applications, the model we use does not describe the empirical reality perfectly well, and the preference conditions of our model are violated to some extent. One reason is that there usually is randomness and noise in the data. Another reason is that there may be systematic deviations. We nevertheless continue to use our model if no more realistic and tractable model is available, and we then try to determine the parameters of our model that best fit the data, for instance by minimizing a distance, such as a sum of squared differences, between the predictions of the model and the actual data. Alternatively, we may add a probabilistic error theory to the deterministic decision model (called the core model) and determine the parameters that maximize the likelihood of the data.
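The least-squares route can be sketched as follows. The sketch is a minimal illustration under added assumptions of my own: two-outcome prospects (x, p; 0) evaluated rank-dependently with power utility and the Tversky–Kahneman (1992) weighting family, and purely hypothetical certainty-equivalent data.

# A minimal sketch of least-squares parameter fitting. The model assumed here
# (power utility, Tversky-Kahneman 1992 weighting) and the data below are
# illustrative placeholders, not taken from the book.
import numpy as np
from scipy.optimize import minimize

# Hypothetical prospects (x, p; 0) and observed certainty equivalents.
xs  = np.array([100.0, 100.0, 100.0, 50.0, 50.0])
ps  = np.array([0.10, 0.50, 0.90, 0.25, 0.75])
ces = np.array([12.0, 40.0, 78.0, 9.0, 31.0])

def utility(x, theta):      # power utility U(x) = x^theta
    return x ** theta

def inv_utility(u, theta):  # inverse of the power utility
    return u ** (1.0 / theta)

def weight(p, gamma):       # Tversky-Kahneman (1992) weighting function
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def predicted_ce(theta, gamma):
    # For (x, p; 0) the rank-dependent value is w(p)U(x); its CE is U^{-1} of that.
    return inv_utility(weight(ps, gamma) * utility(xs, theta), theta)

def sum_of_squares(params):
    theta, gamma = params
    return np.sum((predicted_ce(theta, gamma) - ces) ** 2)

result = minimize(sum_of_squares, x0=[1.0, 1.0],
                  bounds=[(0.2, 2.0), (0.3, 2.0)])
print("fitted (theta, gamma):", result.x, " sum of squares:", result.fun)

The maximum-likelihood route mentioned above would replace the sum-of-squares criterion by the negative log-likelihood of the observed choices under an added error theory.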
Expected utility, rank-dependent utility, and prospect theory all use generalized weighted averages of utilities for evaluating prospects. We have used tradeoff consistency conditions to provide measurements and behavioral foundations for such models. Alternative conditions, based on a bisymmetry condition, have been used in the literature to obtain behavioral foundations. These conditions use certainty equivalents of prospects, so that a richness assumption must be added that certainty equivalents always exist. To avoid details concerning null events, we will assume that S is finite and that all states are nonnull. The latter is implied by strong monotonicity in the following assumption.
Structural Assumption E.1. Structural Assumption 1.2.1 (decision under uncertainty) holds with S finite. Further, ≽ is a monotonic and strongly monotonic weak order, and for each prospect a certainty equivalent exists. □
Although the following multisymmetry condition is a static preference condition, it is best explained by thought experiments using multistage uncertainty. Consider Figure E.1, where we use backward induction (Appendix C) to evaluate the prospects. The indifference sign ∼ indicates that backward induction generates the same certainty equivalent for both two-stage prospects.
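For orientation, the classical two-event bisymmetry condition underlying such foundations can be sketched as follows; this is an illustrative special case in my own phrasing, not the exact multisymmetry statement of this appendix. Writing CE_E(a, b) for the certainty equivalent of the prospect yielding a under event E and b otherwise, the condition requires

\[ \mathrm{CE}_E\bigl(\mathrm{CE}_F(a,b),\, \mathrm{CE}_F(c,d)\bigr) \;=\; \mathrm{CE}_F\bigl(\mathrm{CE}_E(a,c),\, \mathrm{CE}_E(b,d)\bigr), \]

i.e., folding back the two-stage prospect of Figure E.1 by backward induction gives the same certainty equivalent whichever of the two stages is resolved first.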