Sludge is one of the most important yet underappreciated problems in modern society. Examples of sludge include unnecessarily complex paperwork requirements, hard-to-navigate documents and websites, long waiting times, and unfriendly or confusing staff interactions. However, little is known about whether some people are more vulnerable to and less accepting of some types of sludge than others. Drawing on data from a nationally representative survey with 1,591 participants from Ireland, we show that people report being particularly vulnerable to outdated websites with broken links, unfriendly staff interactions, complex documents laden with jargon, and hard-to-navigate websites. These are also the types of sludge that are least acceptable. Less vulnerability is reported to long waiting times and requirements to provide private information. We find only minor differences in sludge perceptions depending on whether the sludge emerges in the public or the private sector. Moreover, people with poor mental health report being more vulnerable to and less accepting of sludge. Self-reported administrative literacy is related to less reported vulnerability, and the tendency to procrastinate and a lack of time and mental energy predict more reported vulnerability to sludge. Administrative literacy and a lack of mental energy also predict acceptability of sludge.
New technologies are offering companies, politicians, and others unprecedented opportunities to manipulate us. Sometimes we are given the illusion of power, of freedom, through choice, yet the game is rigged, pushing us in specific directions that lead to less wealth, worse health, and weaker democracy. In Manipulation, nudge theory pioneer and New York Times bestselling author Cass Sunstein offers a new definition of manipulation for the digital age, explains why it is wrong, and shows what we can do about it. He reveals how manipulation compromises freedom and personal agency, while threatening to reduce our well-being; he explains the difference between manipulation and unobjectionable forms of influence, including 'nudges'; and he lifts the lid on online manipulation and manipulation by artificial intelligence, algorithms, and generative AI, as well as threats posed by deepfakes, social media, and 'dark patterns,' which can trick people into giving up time and money. Drawing on decades of groundbreaking research in behavioral science, this landmark book outlines steps we can take to counteract manipulation in our daily lives and offers guidance to protect consumers, investors, and workers.
It is time to recognize a moral right not to be manipulated. At the same time, the creation of a legal right not to be manipulated raises hard questions, in part because of definitional challenges; there is a serious risk of vagueness and a serious risk of overbreadth. It is probably best to start by prohibiting particular practices: the most egregious forms of manipulation. The basic goal should be to build on the claim that in certain cases, manipulation is a form of theft; the law should forbid theft, whether it occurs through force, lies, or manipulation. Some manipulators are thieves. Examples include hidden terms and automatic enrollment in programs that take people's money and time.
People buy some goods that they do not enjoy and wish did not exist. They might even be willing to pay a great deal for such goods, whether the currency involves time, commitment or money. One reason involves signaling to others; so long as the good exists, nonconsumption might give an unwanted signal to friends or colleagues. Another reason involves self-signaling; so long as the good exists, nonconsumption might give an unwanted signal to an agent about himself or herself. Yet another reason involves a combination of network effects and status competition; nonconsumption might deprive people of the benefits of participating in a network and thus cause them to lose relative position. Legal responses here, combating a form of manipulation, might be contemplated when someone successfully maneuvers people into a situation in which they are incentivized to act against their interests, by consuming a product or engaging in an activity they do not enjoy, in order to avoid offering an unwanted signal. Prohibitions on waiving certain rights might be justified in this way; some restrictions on uses of social media, especially by young people, might be similarly justified.
“Choice Engines,” powered by Artificial Intelligence (AI) and authorized or required by law, might produce significant increases in human welfare. A key reason is that they can simultaneously (1) preserve autonomy and (2) help consumers to overcome inadequate information and behavioral biases, which can produce internalities, understood as costs that people impose on their future selves. Importantly, AI-powered Choice Engines might also take account of externalities, and they might nudge or require consumers to do so as well. Nonetheless, AI-powered Choice Engines might show behavioral biases. It is also important to emphasize that AI-powered Choice Engines might be enlisted by insufficiently informed or self-interested actors, who might exploit inadequate information or behavioral biases, and thus reduce consumer welfare. AI-powered Choice Engines might also be deceptive or manipulative, and legal safeguards are necessary to reduce the relevant risks.
Manipulation is wrong for two reasons. On Kantian grounds, manipulation, lies, and paternalistic coercion are moral wrongs, and for similar reasons; they deprive people of agency, insult their dignity, and fail to respect personal autonomy. On welfarist grounds, manipulation, lies, and paternalistic coercion share a different characteristic; they displace the choices of those whose lives are directly at stake, and who are likely to have epistemic advantages, with the choices of outsiders, who are likely to lack critical information. Kantians and welfarists should be prepared to agree that manipulation is wrong, though on very different grounds.
New technologies manipulate people, sometimes by producing images or videos that seem real but that are not. Deepfakes are an example. They may be deceptive; they are certainly manipulative. Strong steps should be taken against deepfakes that manipulate people in ways that cause economic or political harm. Deepfakes are a clear and present danger to individual autonomy, to the integrity of reputations, to commerce, to politics, and to democracy.
Choice architecture should not, and need not, compromise either dignity or self-government, though imaginable forms could do both. Some nudges are objectionable because the choice architect has illicit ends. When the ends are legitimate, and when nudges are fully transparent and subject to public scrutiny, a convincing ethical objection is less likely to be available. There is, however, room for ethical objections in the case of well-motivated but manipulative interventions, certainly if people have not consented to them; such nudges can undermine autonomy and dignity. Most nudges are not manipulative, but some of them cross the line.
What is manipulation? There is no single definition; manipulation comes in many varieties. But consider this understanding: Manipulation is a form of influence that does not respect its victim’s capacity for reflective and deliberative choice. That understanding captures many cases of manipulation, and it helps to distinguish manipulation from coercion and deception. Many cases of manipulation involve trickery or covert influence, which also fail to respect people’s capacity for reflective and deliberative choice.
Some of the worst cases of manipulation involve trickery: covert or hidden influences on people's choices, compromising their ability to make reflective and deliberative choices. Deepfakes might well be manipulative; often they are. Manipulators use sludge to their benefit; they put friction in our way and hide important features of the situation. When people buy goods that they wish did not exist, they lose time and money. Their lives may go much worse. Their freedom is at risk as well. Manipulation can be a form of theft. Social norms should stand against it. In the most egregious cases, so should law.
Consumers, employees, students and others are often subjected to ‘sludge’: excessive or unjustified frictions, such as paperwork burdens, that cost time or money; that may make life difficult to navigate; that may be frustrating, stigmatizing or humiliating; and that might end up depriving people of access to important goods, opportunities and services. Sludge is a form of manipulation. Because of behavioral biases and cognitive scarcity, sludge can have much more harmful effects than private and public institutions anticipate. To protect consumers, investors, employees and others, firms and private and public institutions should regularly conduct Sludge Audits to catalogue the costs of sludge and to decide when and how to reduce it. Sludge often has costs far in excess of benefits, and it can hurt the most vulnerable members of society.