In 2015, an online video went viral showing a self-parking car plowing into a group of onlookers. The car, a Volvo XC60, was not malfunctioning. Instead, the driver mistakenly believed that the car was equipped to detect pedestrians. But, as it turned out, pedestrian detection was not standard on this self-parking car. It was an add-on, a “luxury upgrade” that the owner had not purchased. Fortunately, no one was seriously hurt.
In light of reported accidents like this (including one in which a pedestrian was killed), regulators and technophiles have begun asking, Are We Programming Killer Cars? While it seems straightforward that self-driving cars should be designed to avoid killing people and animals, it is far from straightforward how this should be done.
In his book Calculated Risks, decision-making expert Gerd Gigerenzer reports the case of a doctor who convinced ninety “high-risk” women without cancer to sacrifice their breasts “in a heroic exchange for the certainty of saving their lives and protecting their loved ones from suffering and loss.” But as Gigerenzer points out, if the doctor had done the calculations correctly, he would have found that the vast majority of these women (eighty-four out of ninety, to be precise) were not expected to develop breast cancer at all.
Was this an isolated case of poor reasoning on the part of a single doctor? Unfortunately, the answer is no, as is plainly apparent in the ongoing controversies surrounding breast cancer and prostate cancer screening.
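A back-of-the-envelope check makes Gigerenzer’s point concrete. Only the eighty-four-out-of-ninety split comes from the text; the percentage below is simply derived from it:

$$90 - 84 = 6 \ \text{expected cases}, \qquad \frac{6}{90} \approx 0.07.$$

In other words, the expected rate of breast cancer in this group was roughly 7 percent, not the certainty the women believed they were trading their breasts for.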
According to the National Institute of Mental Health, major depression is one of the most common mental disorders in the United States, affecting over seventeen million Americans. This is a serious public health issue because depression can severely impair a person’s ability to carry out major life activities. The prevalence of depression is nearly twice as high in females as it is in males, and the rate of depression is highest among young people aged 18–25. An estimated 65 percent of people who have been diagnosed with depression are treated with a combination of psychotherapy and medication, with mixed success (www.nimh.nih.gov).
Dr. X has noticed that patients in his practice who are depressed also have lower levels of a certain chemical in their blood. He develops a product that raises the level of that chemical, and, sure enough, his patients claim they feel much better. On the basis of this, he writes a book touting the importance of boosting levels of that chemical, goes on talk shows, and launches a self-help website where he offers his product for sale.
In recent years, game theory has been combined with artificial intelligence to discover more efficient and more effective ways of addressing a wide variety of real-world problems.
Problems differ in the complexity of their structure as well as the complexity of their solutions. If we’re lucky, the problem we’re grappling with is one that is well defined.
Practical and Theoretical Reasoning. Suppose a movie you really want to see is playing at a theater. But the theater is too far to walk to, and you don’t have a car.
Bayes’ rule is the dominant decision-making framework across a number of disciplines, including medical science, economics, psychology, and physics. It is a powerful tool for making decisions under conditions of uncertainty. The point of Bayesian decision-making is to ground our beliefs in actual facts and data. We have a belief, new information comes along, and we update our belief based on that information. That is how we ensure our heads are full of true beliefs grounded in evidence rather than prejudice.
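For reference, the updating process described here is captured by the standard statement of Bayes’ rule (the formula itself is not quoted in the excerpt):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

where $H$ is a hypothesis (say, “it will rain today”), $E$ is the new evidence (say, “there are rainclouds”), $P(H)$ is the prior belief, and $P(H \mid E)$ is the updated, or posterior, belief.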
We are always developing and updating beliefs from a place of uncertainty, which means that most of our beliefs are probabilistic. For example, the weather report says that the chance it will rain today is low, so that is what we believe. But then we check the sky, see rainclouds, and update our belief based on that observation. Now we believe there is a high probability that it will rain today. We updated our belief based on objective evidence. We behaved rationally.
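Here is a minimal numeric sketch of that rain update using Bayes’ rule. All of the numbers and names are illustrative assumptions, not figures from the text: a 30 percent prior taken from the forecast, and assumed likelihoods of seeing rainclouds on rainy versus dry days.

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule, with the evidence term P(E) expanded over the
    two cases H and not-H."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Hypothetical numbers: the forecast gives rain a 30% chance (our prior);
# we assume rainclouds appear on 90% of rainy days but only 20% of dry days.
p_rain_given_clouds = posterior(prior=0.30,
                                likelihood_if_true=0.90,
                                likelihood_if_false=0.20)
print(f"Updated belief that it will rain: {p_rain_given_clouds:.2f}")  # ~0.66
```

With those assumed numbers, seeing the clouds lifts the probability of rain from 0.30 to about 0.66, exactly the low-to-high shift the paragraph describes.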
The 2016 US presidential race was one of the most contentious in recent history for one very important reason – the reliance on ad hominem attacks by both candidates to bolster their bids for office.
In the annals of financial history, the year 2008 stands out like a tarantula on white bread. That was the year the banking industry faced its worst crisis since the Great Depression. Unprecedented rises in real-estate prices during the previous decade seduced bankers into making riskier and riskier mortgage loans. When the housing bubble burst, so did their mortgage portfolios. The banking behemoth Lehman Brothers went bankrupt. Others, such as Merrill Lynch and AIG, came within a hair’s breadth of failing as well until the federal government stepped in to rescue banks deemed “too big to fail.”
The date was June 22, 2010. It was the final round of the British game show Golden Balls. The two contestants, Stephen and Sarah, faced each other across a table, as anxious as cats on a hot tin roof. The people in the audience were collectively holding their breath.
At stake was £100,000 (about $150,000). The two contestants each had two golden balls in front of them. Inside one was the word Split. Inside the other was the word Steal. If both chose the Split ball, they each went home with £50,000. If one chose Split and the other chose Steal, the one who chose Steal got all of the money and the one who chose Split went home with nothing. If they both chose Steal, both of them went home with nothing.
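The split/steal choice just described can be laid out as a simple payoff table. The sketch below uses the £100,000 pot from the excerpt; the dictionary layout and variable names are mine, not anything from the show:

```python
# Payoffs (Stephen, Sarah) in pounds for the split/steal decision described above.
POT = 100_000

payoffs = {
    ("split", "split"): (POT // 2, POT // 2),  # both split: 50,000 each
    ("split", "steal"): (0, POT),              # the one who steals takes it all
    ("steal", "split"): (POT, 0),
    ("steal", "steal"): (0, 0),                # both steal: both go home with nothing
}

for (stephen, sarah), (pay_stephen, pay_sarah) in payoffs.items():
    print(f"Stephen {stephen:5} / Sarah {sarah:5} -> "
          f"Stephen £{pay_stephen:,}, Sarah £{pay_sarah:,}")
```

Laid out this way, the game is a one-shot cousin of the prisoner’s dilemma: whatever the other player does, stealing never pays less than splitting, yet if both players reason that way, both leave with nothing.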