Hindsight changes the relative importance of the influences that are judged to be responsible for an event in directions that make them more compatible with the known outcome. Knowing the first result of a novel experiment makes identical subsequent results appear more likely, and so belittles the outcome of the investigation.
The retroactive interference produced by hindsight can bias the memory of a forecast. But the response contraction bias can have an influence on the reported memory as great as, or greater than, that of hindsight. There is also a reverse effect that corresponds to proactive interference: memory of a forecast can reduce hindsight bias. The reduction can be produced indirectly by asking for a reason at the time of the forecast to strengthen the memory of the forecast. The reduction can also be produced directly at the time of recall by asking for a reason, or simply by giving a warning that a reason will have to be given. The writing of history, forecasting, and the replication of experimental results are all likely to be affected by hindsight.
Relations between foresight and hindsight
The flow chart of Figure 4.1 shows the time sequence in a full investigation of hindsight bias. A forecast is made before an event occurs. The forecaster remembers part or all of the forecast. Then comes the outcome, which produces hindsight. The arrows and paths at the sides of the figure illustrate ways in which hindsight and foresight are related to each other.
Tversky and Kahneman describe some of their investigations in which practical decision makers could use heuristics when they work in their own fields of expertise. But most investigations present laboratory-type problems on probability to ordinary students. Tversky and Kahneman attribute the errors that the students make to their use of heuristics. But there are many other reasons for errors: failing to know the appropriate normative rules or failing to use them; problem questions that are too difficult for ordinary students to answer correctly; questions with misleading contexts; and the simple biases that occur in quantifying judgments. One student may fail for one reason whereas other students fail for other reasons. Training reduces the errors. Some of the original investigations now require changes of interpretation. One investigation shows how results that violate the representativeness heuristic can be accounted for by changing the dimension along which representativeness is judged.
Heuristics as substitutes for normative rules
During the 14 years between 1969 and 1983, Tversky and Kahneman publish numerous articles that introduce and develop their novel theory of the use of heuristic biases or rules of thumb in dealing with probabilities. In this book the biases are discussed in separate chapters. They are listed in Table 1.1, together with 2 additional biases described by Lichtenstein and Fischhoff or by Fischhoff: apparent overconfidence and hindsight bias. This work has had a major influence on theories of judgment and decision making.
Of the investigations described in this book, 19 can be said to involve questions that are too difficult for the respondents. Four of these questions are impossible for anyone to answer properly. Another 7 questions involve the use of percentages, or statistical techniques that many ordinary respondents cannot be expected to know. Five more questions require other relevant knowledge that most ordinary respondents lack, or in one case cannot be expected to think of. The remaining 3 of the 19 questions either require more time than is permitted, or involve complex mental arithmetic. There are also 9 additional questions with misleading contexts.
Questions too difficult for the respondents
Table 14.1 lists 19 examples where biased performance appears to be designed into an investigation by the use of problem questions. The questions are too difficult for the respondents, who are usually ordinary students. Some questions are virtually impossible for anyone to answer properly. The table is divided into 6 parts, according to the source of the difficulty.
Impossible tasks
Part 1 of Table 14.1 lists 4 questions that are impossible to answer satisfactorily. Questions 1.1, 1.2 and 1.3 (Fischhoff and Slovic, 1980, Experiments 1, 3, 5 and 6; see Chapter 3, Impossible perceptual tasks) are impossible because they can only be answered correctly by responding with a probability of .5 all the time. Paid volunteers with unspecified backgrounds are given 2-choice tasks that are impossible to perform better than chance.
In 28 of the investigations described in this book, a simple bias of quantification accompanies a complex bias in dealing with probabilities. In 11 of these investigations the simple bias can account for the full effect attributed to the complex bias. In another 5 investigations the simple bias can have an effect as great as, or greater than, that of the complex bias. In 3 more investigations the simple bias can either increase or reduce the effect of a complex bias. In the other 9 investigations the simple bias can account for an incidental effect that is found while investigating a complex bias.
Investigations with both simple and complex biases
Table 13.1 lists 28 investigations that involve both a simple bias in quantifying judgments and a complex bias in estimating probabilities. Column 3 of the table shows that the simple response contraction bias accounts for 18, or 64%, of the investigations. Transfer bias and the equal frequency bias account for another 22% and 11% respectively. The 3 most commonly associated complex biases are apparent overconfidence or underconfidence, 36%, the availability and simulation fallacies, 25%, and the expected utility fallacy, 11%. However, the exact proportions of both the simple and complex biases depend on the particular investigations described in this book.
In discussing investigations with both simple and complex biases, investigators characteristically concentrate on the complex bias that they are studying. They pay little or no attention to the accompanying simple bias.
The availability and simulation heuristics are used when people do not know the frequency or probability of instances in the outside world, and so cannot follow the normative rule of using objective measures. Instead they judge frequency or probability by assembling the stored information that is available in memory. This can lead to the availability fallacy when what is retrieved from memory is biased by familiarity, by the effectiveness with which memory can be searched, by misleading unrepresentative information, or by imaginability.
The simulation fallacy is the name given to an erroneous estimate of frequency or probability that is obtained by the heuristic of imagining or constructing instances, instead of by recalling instances. The availability fallacy may be responsible for some of the commonly reported associations between the Draw a Person test and the symptoms reported by patients.
Judged frequency or probability depends on subjective availability or ease of simulation
Tversky and Kahneman (1973, pp. 208–9) distinguish between availability in the outside world, which can be called objective availability, and availability in a person's stored experience or memory, which can be called subjective availability. Subjective availability is based on the frequency or strength in memory of associative bonds. The normative rule is to use objective measures of availability in the outside world. But when objective measures are not available, people have to adopt Tversky and Kahneman's availability heuristic of judging frequency or probability in the outside world using subjective availability.
When people have an obvious anchor, they may estimate probabilities using Tversky and Kahneman's (1974, p. 1128) anchoring and adjustment heuristic, instead of following the normative rule of avoiding the use of anchors. They play safe and select a response that lies too close to the anchor or reference magnitude. There are 2 versions of the heuristic that correspond to 2 simple biases: the response contraction bias that is discussed in previous chapters, and the sequential contraction bias.
The sequential contraction bias version occurs in making a final estimate from an initial reference magnitude. It may be possible to avoid the sequential contraction bias by using only judgments of stimuli following an identical stimulus, by counterbalancing ascending and descending responses and averaging, or by asking for only one judgment from each person, starting from a previously determined unbiased reference magnitude. The sequential contraction bias can be used successfully in selling to and buying from unwary customers. It can bias the comparative evaluation of a number of candidates or rival pieces of equipment. It can lead to unwarranted optimism in planning, and in evaluating the reliability of a complex system. It can be responsible for the conservatism in revising probability estimates.
Response contraction bias and sequential contraction bias versions
When anchors are available, Tversky and Kahneman (1974, p. 1128) describe how people use the anchoring and adjustment heuristic instead of following the normative rule of avoiding the use of anchors.
Subjective probabilities show 2 kinds of bias. First, ordinary people know little or nothing about probability theory and statistics. Thus, their estimates of probability are likely to be biased in a number of ways. The second kind of bias occurs in investigating the first kind. The investigators themselves may introduce biases. Reviews of the literature characteristically accept the results of the investigations at their face value, without attempting to uncover and describe the biases that are introduced by the investigators.
The book treats both kinds of bias, covering both theory and practice. It concentrates on Tversky and Kahneman's pioneering investigations (Kahneman and Tversky, 1984) and the related investigations of their like-minded colleagues. It describes how they interpret their results using their heuristic biases and normative rules. It also touches on possible alternative normative rules that are derived from alternative statistical or psychological arguments.
The book should provide useful background reading for anyone who has to deal with uncertain evidence, both in schools of business administration and afterwards in the real world. It should be sufficiently straightforward for the uninitiated reader who is not familiar with behavioral decision theory. It should be useful to the more sophisticated reader who is familiar with Kahneman, Slovic and Tversky's (1982) edited volume, Judgment under uncertainty: Heuristics and biases, and who wants to know how the chapters can be linked together and where they should now be updated.
When there are 2 independent probabilities of the same event, the base rate heuristic is to ignore or give less weight to the prior probability or base rate than to the likelihood probability. Tversky and Kahneman's normative rule is to combine the 2 independent probabilities, using the Bayes method.
Both the examination problem and the cab problem show that the base rate is less neglected when it is seen to be causal than when it is merely statistical. In the professions problem, neglect of the base rate is reduced by emphasizing the base rate and by reducing the emphasis on the likelihood probability. In both the cab problem and the professions problem, complete neglect of the base rate can be produced by committing a common logical fallacy that is unrelated to the base rate. The medical diagnosis problem shows that the neglect of the base rate can produce many false positives. The neglect can be greatly reduced by avoiding the use of undefined statistical terms, percentages, and chance, and by removing time stress if present.
Neglect of the base rate
A base rate describes the distribution of a characteristic in a population. An example of a causal base rate is the pass rate of an examination. The base rate is causal because it depends on the difficulty of the examination. An example of a noncausal or statistical base rate is the selected proportion of candidates in a sample who pass or fail an examination.
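As a concrete illustration of the normative rule, the sketch below combines a base rate with a likelihood probability using the Bayes method. The figures are the ones commonly quoted for the cab problem (85% Green cabs, 15% Blue, a witness who is correct 80% of the time) and are used here only as an illustrative assumption, not as values taken from this chapter.

```python
# A minimal sketch of the Bayes combination called for by the normative rule.
# The figures are the ones commonly quoted for the cab problem and are
# illustrative assumptions here.

def posterior(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """P(hypothesis | evidence), combining the prior base rate with the likelihood."""
    p_evidence = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_evidence

# The witness reports a Blue cab.
p_blue = posterior(base_rate=0.15, hit_rate=0.80, false_alarm_rate=0.20)
print(f"P(cab is Blue | witness says Blue) = {p_blue:.2f}")  # about 0.41

# Neglecting the base rate amounts to answering with the likelihood alone, 0.80.
```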
The regression fallacy occurs with repeated measures. The normative rule is that when a past average score happens to lie well above or below the true average, future scores will regress towards the average. Kahneman and Tversky attribute the regression fallacy to the heuristic that future scores should be maximally representative of past scores, and so should not regress. Suppose students are accustomed to seeing individuals vary from time to time on some measure. If so, the majority are likely to recognize regression in individuals on this measure when they are alerted to the possibility. Regression in group scores reduces the correlation between the scores obtained on separate occasions. Taking account of regression in predicting a number of individual scores reduces the accuracy of the predictions of the group scores by reducing the width of the distribution.
To avoid the regression fallacy, people need to have it explained to them. They should beware of the spurious ad hoc explanations that are often proposed to account for regression. Examples are given of regression after an exploratory investigation, during a road safety campaign, following reward and punishment, and during medical treatment for a chronic illness that improves and deteriorates unpredictably.
Regression towards the average
Regression can occur whenever quantities vary randomly. The normative rule is that future scores regress towards the average. Kahneman and Tversky (1973, p. 250) suggest that the heuristic bias is based on the belief that future scores should be maximally representative of past scores, and so should not regress.
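The sketch below is a minimal simulation of the normative rule, assuming that each observed score is a stable true score plus independent noise; all of the numbers are illustrative. A group selected for scoring well above average on one test scores closer to the average on a retest.

```python
# A small simulation of regression towards the average. Each observed score is
# modelled as a true score plus independent noise; all numbers are illustrative.
import random

random.seed(1)
true_scores = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the people who score well above average on the first test.
selected = [(a, b) for a, b in zip(test1, test2) if a > 120]
mean_first = sum(a for a, _ in selected) / len(selected)
mean_second = sum(b for _, b in selected) / len(selected)

print(f"first-test mean of the selected group: {mean_first:.1f}")
print(f"retest mean of the same group:         {mean_second:.1f}")  # closer to 100
```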
The conjunction fallacy or conjunction effect involves judging a compound event to be more probable than one of its 2 component events. Tversky and Kahneman like to attribute the conjunction fallacy to their heuristic of judgment by similarity or representativeness, instead of by probability. However, the fallacy can result from averaging the probabilities or their ranks, instead of using the normative rule of multiplying the probabilities. The conjunction fallacy can also result from failing to detect a binary sequence hidden in a longer sequence, and from failing to invert the conventional probability.
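The contrast between the two rules can be made concrete with a short worked sketch; the two component probabilities below are illustrative assumptions, not figures from a particular investigation.

```python
# Normative rule: multiply the component probabilities.
# Fallacy-producing shortcut: average them (or their ranks).
# The two component probabilities are illustrative assumptions.
p_event = 0.3         # judged probability of the single event on its own
p_added_detail = 0.9  # judged probability of the detail conjoined with it

p_conjunction_normative = p_event * p_added_detail        # 0.27, never exceeds either component
p_conjunction_averaged = (p_event + p_added_detail) / 2   # 0.60, exceeds the smaller component

print(p_conjunction_normative, p_conjunction_averaged)
# Averaging makes the compound event look more probable than its less probable
# component (0.60 > 0.30), which is the conjunction fallacy.
```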
The causal conjunction fallacy occurs in forecasting and in courts of law. The fallacy involves judging an event to be more probable when it is combined with a plausible cause. The fallacy can be produced by judging p(event | cause) instead of p(event & cause). In theory, the conjunction fallacy or conjunction effect may be an appropriate response. This can happen when one of a number of possible alternatives is known to be correct, and the student has to discover which alternative it is by combining conjunctive evidence, using the Bayes method. Versions of both the conjunction fallacy and the dual conjunction fallacy can be accounted for in this way, although it is very unlikely that untrained students would know any of this.
The incidence of the conjunction fallacy can be reduced by asking for a direct comparison between the 2 key alternatives, instead of asking for all the alternatives to be ranked by probability; by changing the question from ranking probabilities or asking for percentages to asking for numbers out of 100; by calling attention to a causal conjunction; and by training in probability theory and statistics.
Probabilities differ from numbers in that they have a limited range, from 0.0 through 1.0. Probability theory and statistics were developed between World Wars I and II in order to model events in the outside world. More recently, the models have been used to describe the way people think, or ought to think. Both frequentists and some subjectivists argue that the average degree of confidence in single events is not comparable to the average judged frequency of events in a long run. The discrepancies between the 2 measures can be ascribed to biases in the different ways of responding.
Tversky and Kahneman, and their like-minded colleagues, describe a number of what can be called heuristic or complex biases that influence the way people deal with probabilities: apparent overconfidence, hindsight bias, the small sample fallacy, the conjunction fallacy, the regression fallacy, base rate neglect, the availability and simulation fallacies, the anchoring and adjustment biases, the expected utility fallacy, and bias by frames. Each complex bias can be described by a heuristic or rule of thumb that can be said to be used instead of the appropriate normative rule. Some version of the heuristic of representativeness is used most frequently.
The normative theory of expected utility predicts that people should choose the largest expected gain or smallest expected loss. In choosing between 2 options with the same expected gain or loss, people should choose each option about equally often. People who do not follow the normative theory can be said to commit the expected utility fallacy. Most people commit the fallacy when choosing between 2 options with the same moderate expected gain or loss. They prefer a high probability gain but a low probability loss. With very low probabilities the preferences reverse. Most people buy a lottery ticket that offers a very low probability gain. They buy insurance to avoid a very low probability loss. Also, following the response contraction bias, large gains and losses are undervalued when compared with small gains or losses. Losses matter more than gains.
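A short sketch of the normative comparison may help; the amounts and probabilities are illustrative assumptions, not figures taken from a particular investigation.

```python
# Expected value of a simple gamble, used for the normative comparison.
# The amounts and probabilities are illustrative assumptions.
def expected_value(p: float, amount: float) -> float:
    return p * amount

sure_gain = expected_value(1.00, 450)    # 450 for certain
risky_gain = expected_value(0.50, 900)   # 50% chance of 900

print(sure_gain, risky_gain)  # the same expectation: 450.0 and 450.0
# The normative theory predicts indifference, yet most people prefer the sure
# gain; with losses of the same sizes the typical preference reverses.
```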
Prospect theory describes the choices that people actually make. The theory accounts for the preferences just described by postulating a non-linear response contraction bias to convert probabilities to decision weights, and another nonlinear response contraction bias to convert gains and losses to positive and negative subjective values. Multiplying the subjective values by the decision weights produces prospects that should match the preferences.
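The sketch below shows the two nonlinear transformations in skeleton form. The functional shapes and parameter values follow the estimates commonly cited for cumulative prospect theory (Tversky and Kahneman, 1992) and are assumptions here, not values given in this book.

```python
# Skeleton of the two nonlinear transformations postulated by prospect theory.
# The forms and parameters (alpha = 0.88, lambda = 2.25, gamma = 0.61) follow
# commonly cited estimates and are assumptions here.

def subjective_value(x: float, alpha: float = 0.88, loss_aversion: float = 2.25) -> float:
    """Concave for gains, convex and steeper for losses (losses matter more)."""
    return x ** alpha if x >= 0 else -loss_aversion * ((-x) ** alpha)

def decision_weight(p: float, gamma: float = 0.61) -> float:
    """Overweights small probabilities and underweights moderate-to-large ones."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# The prospect of a gamble is the decision weight multiplied by the subjective value.
prospect = decision_weight(0.5) * subjective_value(900)
print(round(prospect, 1))
```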
However, neither expected utility theory nor prospect theory will account for all the majority choices when gains and losses are intermixed in the same investigation. Pricing options can reverse the preferences between pairs of options, owing to the sequential contraction bias that pricing encourages.
Tversky and Kahneman attribute the small sample fallacy to the heuristic that small and large samples should be equally representative. The normative rule is that small samples are not as representative as are large samples. Today, probably few people with a scientific training believe in what can be called the small sample fallacy for the size or reliability of a sample: the belief that samples of all sizes should be equally reliable. They are more likely to believe in what can be called the small sample fallacy for distributions: the belief that small and large sample distributions should be equally regular. Small samples that appear too regular are judged to be less probable than are less regular samples. The gambler's fallacy is another small sample fallacy for distributions.
Investigators can reduce the number of students who believe in the small sample fallacy by demonstrating that small heterogeneous samples are often unrepresentative. A practical example of the small sample fallacy for size or reliability is the belief that an unexpected result in the behavioral sciences can be successfully replicated with a reduced size of sample.
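The unreliability of small samples can be demonstrated with a short simulation; the sample sizes and number of repetitions below are illustrative.

```python
# The proportion of heads varies far more across small samples of coin tosses
# than across large ones. Sample sizes and repetition counts are illustrative.
import random
import statistics

random.seed(0)

def sample_proportions(sample_size: int, n_samples: int = 2_000) -> list:
    return [sum(random.random() < 0.5 for _ in range(sample_size)) / sample_size
            for _ in range(n_samples)]

for n in (10, 1000):
    spread = statistics.stdev(sample_proportions(n))
    print(f"n = {n:4d}: standard deviation of the observed proportion = {spread:.3f}")

# The spread shrinks roughly with the square root of the sample size, so a
# reduced sample is much less likely to replicate an unexpected result.
```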
Small samples assumed to be representative
People who commit the small sample fallacy can be said to assume that a small random sample should be as reliable as, and as regular as, a large random sample, but not too regular. Tversky and Kahneman (1971, p. 106; Kahneman and Tversky, 1972b, p. 435) attribute the small sample fallacy to the heuristic of representativeness.