Privacy is gravely endangered in the digital age, and we, the digital citizens, are its principal threat, willingly surrendering it to avail ourselves of new technology, and granting the government and corporations immense power over us. In this highly original work, Firmin DeBrabander begins with this premise and asks how we can ensure and protect our freedom in the absence of privacy. Can—and should—we rally anew to support this institution? Is privacy so important to political liberty after all? DeBrabander makes the case that privacy is a poor foundation for democracy, that it is a relatively new value that has been rarely enjoyed throughout history—but constantly persecuted—and politically and philosophically suspect. The vitality of the public realm, he argues, is far more significant to the health of our democracy, but is equally endangered—and often overlooked—in the digital age.
For the most part, our spies contend that we know we are spied upon, but accept the surveillance because of the many concrete benefits we receive in return. Marketers want us to know, for example, that the more we divulge, the better they can serve us. Indeed, they can help us realize desires and aspirations before they occur to us – desires we never even knew we had. In that sense, they promise to empower us, help us get what we want, and improve our personal lives. In turn, this implies that savvy shoppers expose personal details, even if these seem arcane and unremarkable. Apparently, that is not for us to judge.
I have been arguing that privacy is on life support, and its prognosis looks dim. It is thoroughly besieged in the digital age, and the general population is perhaps its greatest enemy, happily surrendering it to indulge in all manner of conveniences and innovations. Critics and privacy advocates warn that this is a dire development; privacy is necessary for a free and fulfilled life. The digital tidal wave forces us to face a future where privacy may be nonexistent, or at least radically transformed and diminished. I don’t believe the proper solution is to urge people to start caring about privacy again, build stronger walls around their personal lives, so to speak, and block out spying eyes. This seems utterly impossible, and it is unreasonable to ask this of people who are eager to tap into all that the digital economy has to offer. We must find a way to thrive despite this state of affairs.
In Plato’s dialogue “Meno,” the eponymous speaker, after much frustration, declares that Socrates, his insistent interviewer, is like a “broad torpedo fish” – better known to us as an electric ray – which stuns and paralyzes those who approach it.1 The two men had conversed at length on the topic of virtue, something Meno professed to know much about, and he readily produced several definitions. After Socrates debunked each in turn, Meno was at a loss. Like the torpedo fish, he tells Socrates, “you now seem to have had that effect on me, for both my mind and my tongue are numb, and I have no answer to give you. Yet I have made many speeches about virtue before large audiences on a thousand occasions, very good speeches I thought but now I cannot even say what it is!”2
In the face of rapid technological changes transforming our lives in profound ways, the forecast for privacy looks dark. Its demise is hastened by “an unstoppable arms race in communication tools and data mining capabilities, which in turn are both due to the continued progression of Moore’s Law. The cost of keeping secrets increases inversely to decreases in the cost of computing.”1 Roughly, Moore’s Law holds that computing power will grow exponentially; digital devices will become faster and more powerful in rapid succession. Where technological changes may once have developed gradually, that is precisely not the case in the digital age, where breakneck speed of change is the rule. And digital technology will become cheaper, more accessible, and broadly distributed in the process. Taken together, this means that the cost and hassle of protecting privacy grows exponentially, too. In the digital net that envelops our everyday lives, it will become increasingly difficult and rare to perform any task without revealing ourselves, and opening our lives to spying eyes. And our spies are not content to watch us from without; they will install sentinels in our very bodies, and monitor us from within.
The current crisis of privacy is, or ought to be, especially surprising in the United States, because privacy concerns, historians and legal scholars attest, were a prime driver in the creation of the nation, and the erection and expansion of our basic freedoms. Our disregard for privacy is surprising for another reason: it defies predictions and expectations of how we are supposed to act under surveillance. Why, if we know we are watched – and we admit as much – is online behavior so shameless, seemingly open and free? Why do so many of us feel compelled to blare intimate details, and share mundane and embarrassing events with the whole world? What does that say about us? Is human nature changing before our very eyes, in the digital age, such that we show no compunction about living an utterly public life, in almost all respects? How can we retain any enduring or grudging respect for privacy in this brave new world? Some people muster objections; some admit there is something wrong in privacy invasions – but what? We have a vocabulary of privacy, and a deep historical relationship to it (or so we are told), but hardly know what it means anymore, why it is of value, and worthy of defense. And in the digital age, privacy requires no modest or ordinary defense, but a monumental call to arms, to beat back the tidal wave of surveillance – which we invite, and facilitate.
Before anyone despairs over the demise of privacy, it is helpful to consider the history of this institution, and how it developed into its modern incarnation. In one respect, privacy is a very young value, and humans have long lived – and communities flourished – without it. Privacy has always been embattled. That is nothing new – you might even say that is its native state. When people managed to achieve some degree or form of privacy in the past, it was in much smaller measure, and far more selectively and rarely enjoyed than advocates and critics say we need and deserve. The amount of privacy we have come to expect or take for granted in contemporary suburban living, by contrast, is almost absurdly generous. It is hard to conceive of an architecture and landscape that prioritizes privacy better. But appearances are deceiving: as I have been arguing thus far, our lives have never been more transparent within our suburban bubbles. Do we care? Better yet: what does this indicate we value in or about privacy? Does it suggest we esteem privacy at all – or something else altogether?
Many are worried for the fate and future of privacy, and rightly so. It is impossible to get anything done these days without leaving telltale digital trails, which eager spies scoop up. And it turns out you don’t have to divulge much for companies to learn a lot about you. Our digital monitors are busy figuring out how to plumb our intimate depths on the basis of seemingly innocuous and mundane details – details that we hardly give a thought to. What’s more, some companies, like Facebook, aim to compile profiles of you even if you are a relative troglodyte, and engage in little or no digital commerce at all. If you do all you can and should to protect your privacy, even making the ultimate sacrifice of forgoing digital communications altogether, this may not be enough. Facebook will simply learn about you from your neighbors, friends, and family, who invoke you, or imply your existence.
The political state is a free-willed creation of reflective individual citizens. Such is the directive issued by Modern philosophy, and which we take for granted in liberal democracy. According to Social Contract theory, articulated principally by Hobbes, Locke and Rousseau, humans leave a state of nature to create and enter the political sphere, which they willingly, intelligently inaugurate through a mutual ‘contract.’ Living independently in a state of nature, while perhaps preferable in some respects, is ultimately unsustainable. For humans to effectively pursue and achieve personal ends, and find fulfillment, whatever shape that may take, they must sacrifice absolute freedom in nature, to live together in security. The salient point for the discussion at hand is that, as Social Contract theory has it, individuals conceive of their goals prior to or independently of the political community. The polis is reduced to a mere platform, if you will, a stage that permits or enables us to pursue what we want, in relative peace and harmony.
People have long sought out the public realm because of a desire for transcendence. The ancient Greeks sought it out because they wanted more than the oikos, or the family home, had to offer. Accordingly, the private realm was long deemed ‘privative’ in some essential way – it deprived us of what it means to be uniquely or distinctly human.1 In Classical times, the oikos was the realm of function and hierarchy. It was hierarchical because of the task at hand, the business of survival. But things were otherwise in the public realm, where men were free – for those lucky enough to be citizens, that is.2 When they entered the public realm, the realm of politics where freedom was exercised, people were released somewhat, or temporarily, from the tyranny of necessity, and could entertain higher matters and higher concerns – uniquely human concerns.
The tax system incentivizes automation, even in cases where it is not otherwise efficient. This is because the vast majority of tax revenue is derived from labor income. When an AI replaces a person, the government loses a substantial amount of tax revenue – potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once labor is capital. Robots are not good taxpayers. The solution is to change the tax system to be more neutral between AI and human workers and to limit automation’s impact on tax revenue. This would be best achieved by reducing taxes on human workers and increasing corporate and capital taxes.
This chapter explains the need for AI legal neutrality and discusses its benefits and limitations. It then provides an overview of its application in tax, tort, intellectual property, and criminal law. Law is vitally important to the development of AI, and AI will have a transformative effect on the law given that many legal rules are based on standards of human behavior that will be automated. As AI increasingly steps into the shoes of people, it will need to be treated more like a person, and more importantly, sometimes people will need to be treated more like AI.
This chapter defines artificial intelligence and discusses its history and evolution, explains the differences between major types of AI (symbolic/classical and connectionist), and describes AI’s most recent advances, applications, and impact. It also weighs in on the question of whether AI can “think,” noting that the question is less relevant to regulatory efforts, which should focus on promoting behaviors that improve social outcomes.