When analysing disinformation, commentators often focus on major platforms and their influence on content circulation. Some also examine institutional media, especially broadcasting. Both are relevant: platforms and media alike form part of the communicative infrastructure underlying public speech. Whatever the focus, the examination of issues, and of suggestions about what to do, is almost endless. Commentary defines false or misleading information in different ways, compares it with historic practices of propaganda and persuasion, considers the emergence of large language models and the content they could generate, documents varied legal responses, and proposes remedies. Here, I examine something that is relevant to that work but often not considered directly.
The 2024 presidential election in the USA demonstrates, with unmistakable clarity, that disinformation (intentionally false information) and misinformation (unintentionally false information disseminated in good faith) pose a real and growing existential threat to democratic self-government in the United States – and elsewhere too. Social media outlets like Facebook (Meta) and Twitter (X) now make it possible to propagate empirically false information to a vast potential audience at virtually no cost. When those outlets are coupled with highly sophisticated algorithms that carefully target the recipients of disinformation and misinformation, voter manipulation becomes easier to accomplish than ever before – and frighteningly effective to boot.
Many legal and political commentators dubbed Donald Trump’s false claim that he was the actual victor of the 2020 American presidential election ‘the Big Lie’. No matter how he complained and dissembled, he lost. After losing, Trump went on a fundraising binge, asking his supporters to give to his legal defense fund so that he could litigate the results of the election he fraudulently claimed to have won. According to the House of Representatives’ January 6 Select Committee, this fund did not exist. As Select Committee member Congresswoman Zoe Lofgren put it, ‘the Big Lie was also a big rip-off’. Because the election was not stolen, and the legal defense fund he touted was nonexistent, Trump’s post-election fundraising was a fraud within a fraud – giving rise to a reasonable argument that it violated the federal wire fraud statute and also constituted common law fraud.
This is a time of retrogression for rights-protecting democracies, and of high levels of distrust in institutions. Of particular concern are threats to the institutions, including universities and the press, that help provide the information base for successful democracies. Attacks on universities, and on university faculties, are rising. In Poland, over the last four years, the world-renowned constitutional law theorist Wojciech Sadurski has been subject to civil and criminal prosecutions for defamation of the governing party. In Hungary, the Central European University (CEU) was forced out by the government and had to relocate in part to Vienna; other attacks on academic freedom followed. Faculty members in a number of countries have had to relocate abroad for their own safety. Governments also attack what subjects can be taught: Hungary has banned gender studies, and in Poland a government minister called for banning gender studies and ‘LGBT ideology’. Attacks on academics and universities, through government restrictions and public or private violence, are not limited to Poland and Hungary; they are of concern in Brazil, India, Turkey and a range of other countries. Attacks on journalists are rising as well. These developments are deeply concerning. The proliferation of ‘fake news’, doctored photos and false claims on social media has been widely documented. Constitutional democracy cannot long be sustained in an ‘age of lies’, where truth and knowledge no longer matter.
The ‘marketplace of ideas’ metaphor tends to dominate US discourse about the First Amendment and free speech more generally. The metaphor is often deployed to argue that the remedy for harmful speech ought to be counterspeech, not censorship; listeners are to be trusted to sort the wheat from the chaff. This deep skepticism about the regulation of even harmful speech in the USA raises several follow-on questions, including: How will trustworthy sources of information fare in the marketplace of ideas? And how will participants know whom to trust? Both questions implicate non-regulatory, civil-society responses to mis- and disinformation. This chapter takes on these questions, considering groups and institutions that deal with information and misinformation. Civil society groups cannot stop the creation of misinformation – but they can decrease its potential to proliferate and to do harm. For example, advocacy groups might be directly involved with fact-checking and debunking misinformation, or with advancing truthful or properly contextualized counter-narratives. And civil society groups can also help strengthen social solidarity and reduce the social divisions that often serve as fodder for and drivers of misinformation.
In United States v. Alvarez, the US Supreme Court ruled that an official of a water district who introduced himself to his constituents by falsely stating in a public meeting that he had earned the Congressional Medal of Honor had a First Amendment right to make that demonstrably untrue claim. Audience members misled by the statement might well be considered to have a First Amendment interest in not being directly and knowingly lied to in that way. Other members of the community might be thought to have a First Amendment interest in public officials such as Xavier Alvarez telling the truth about their credentials and experiences. Nevertheless, as both the plurality and the concurring justices who together formed the majority in Alvarez viewed the case, it was the liar’s interest in saying what he wished that carried the day. Why is that? Crucial to answering this question is whether ‘the freedom of speech’ that the First Amendment tolerates ‘no law abridging’ is understood to be primarily speaker-centered, audience-centered, or society-centered.
This chapter details the formation of the MAS movement, from local teachers, students, artists, and activists to national-level supporters (e.g., professional and scholarly organizations, the hip hop/funk group Ozomatli, and cartoonist Lalo Alcaraz). Of particular importance was the formation of the “Tucson 11” – a group of MAS educators who filed a federal lawsuit challenging the constitutionality of the state law on First and Fourteenth Amendment grounds. Additionally, in this chapter, we explore both the importance of the documentary Precious Knowledge in supporting this movement and how the director’s alleged rape of one of the former MAS students opened lasting community wounds that ran throughout the movement.
On August 22, 2017, Judge Tashima issued a blistering ruling, finding that state representatives had created the law and banned MAS out of racial animus and for partisan political gain, in violation of the First and Fourteenth Amendment rights of Mexican American students in TUSD. A massive local and national uproar followed, celebrating the end of this racist law. Though different Tucson factions claimed shared victory in the ruling, persistent community divisions remained. This chapter details the post-ruling celebrations, the continued community divisions, where the key actors in this drama ended up, the current state of MAS in TUSD, and the national Ethnic Studies renaissance that the Tucson struggle spawned. Of equal importance, it details how the lessons of the MAS controversy can inform the work of those challenging Critical Race Theory bans throughout the country.
This chapter looks at how the police power has evolved in judicial interpretations and legislative enactments to the present day. It begins by exploring how shifting approaches to regulatory governance, along with various state constitutional developments over the past two centuries, affected thinking about the overall structure and purpose of state regulatory authority. It then turns to a number of critical areas in which the police power was used as a tool for protecting health, safety, welfare, and the common good, starting with morals, a linchpin of traditional police power regulation, and proceeding to urban blight, occupational licensing, and public health emergencies.
This chapter touches upon the very large topic of how individual rights interact with the police power. In what sense, and to what degree, do rights constrain state and local exercises of the police power? It is a truism that regulatory power is limited by rights, but this chapter interrogates these issues in more depth and detail, discussing how rights claims are framed in connection with the police power and how the government’s assertions of power are circumscribed by particular doctrines and arguments in courts. Further, the chapter considers how the debate over the nature and content of so-called positive rights implicates police power questions concerning authority and content.
Germany’s content moderation law, NetzDG, is often the target of criticism in English-language scholarship as antithetical to Western notions of free speech and the First Amendment. The purpose of this Article is to encourage those engaged in the analysis of transatlantic content moderation schemes to consider how Germany’s self-ideation influences its policy decisions. Viewed through the lens of what international relations scholars term ontological security, Germany’s aggressive forays into the content moderation space are better understood as an externalization of Germany’s ideation of itself, which rests upon an absolutist domestic moral and constitutional hierarchy grounded in the primacy of human dignity. Ultimately, this Article implores American scholars and lawmakers to consider the impact of this subconscious ideation when engaging with Germany and the European Union in an increasingly multi-polar cyberspace.
The United States’ free speech regime, as codified in the First Amendment to the United States Constitution, stands in obvious contrast to Thailand’s ill-famed lèse-majesté law – Section 112 of the Thai Criminal Code – which prohibits defamation, or even truthful degradation, of the Thai King and Royal Family. Recent scholarship has focused on such differences and has largely depicted the two regimes as diametric opposites. When the First Amendment and Thailand’s lèse-majesté law are viewed in temporal isolation, this scholarly consensus has significant merit. Analyzed over time, however, the two regimes show similarities suggesting that each represents its country’s attempt to accommodate the competing and changing values present within it.
Dean John Wade, who replaced the great torts scholar William Prosser on the Restatement (Second) of Torts, put the finishing touches on the defamation sections in 1977. Apple Computer had been founded a year before, and Microsoft two, but relatively few people owned computers yet. The twenty-four-hour news cycle was not yet a thing, and most Americans still trusted the press.
The term “content moderation,” a holdover from the days of small bulletin-board discussion groups, is quite a bland way to describe an immensely powerful and consequential aspect of social governance. Today’s largest platforms make judgments on millions of pieces of content a day, with world-shaping consequences. And in the United States, they do so mostly unconstrained by legal requirements. One senses that “content moderation” – the preferred term in industry and in the policy community – is something of a euphemism for content regulation, a way to cope with the unease that attends the knowledge (1) that so much unchecked power has been vested in so few hands and (2) that the alternatives to this arrangement are so hard to glimpse.
This chapter addresses an underappreciated source of epistemic dysfunction in today’s media environment: true-but-unrepresentative information. Because media organizations are under tremendous competitive pressure to craft news that harmonizes with their audience’s preexisting beliefs, they have an incentive to report accurately on events and incidents that are selected, consciously or not, to support an impression that is exaggerated or ideologically convenient. Indeed, these organizations must engage in this practice to survive in a hypercompetitive news environment.
What is the role of “trusted communicators” in disseminating knowledge to the public? The trigger for this question, which is the topic of this set of chapters, is the widely shared belief that one of the most notable, and noted, consequences of the spread of the internet and social media is the collapse of sources of information that are broadly trusted across society: the internet has eliminated the power of the traditional gatekeepers who identified and created trusted communicators for the public. Many commentators argue that this is a troubling development because trusted communicators are needed for our society to create and maintain a common base of facts, accepted by the broader public, that is essential to a system of democratic self-governance. Absent such a common base or factual consensus, democratic politics will tend to collapse into polarized camps that cannot accept the possibility of electoral defeat (as they arguably have in recent years in the United States). I aim here to examine recent proposals to resurrect a set of trusted communicators and the gatekeeper function, and to critique them from both practical and theoretical perspectives. But before we can discuss possible “solutions” to the lack of gatekeepers and trusted communicators in the modern era, it is important to understand how those functions arose in the pre-internet era.
The laws of defamation and privacy are at once similar and dissimilar. Falsity is the hallmark of defamation – the sharing of untrue information that tends to harm the subject’s standing in their community. Truth is the hallmark of privacy – the disclosure of facts about an individual who would prefer those facts to be private. Publication of true information cannot be defamatory; spreading of false information cannot violate an individual’s privacy. Scholars of either field could surely add epicycles to that characterization – but it does useful work as a starting point of comparison.
The commercial market for local news in the United States has collapsed. Many communities lack a local paper. These “news deserts,” comprising about two-thirds of the country, have lost a range of benefits that local newspapers once provided. Foremost among these benefits was investigative reporting – local newspapers at one time played a primary role in investigating local government and commerce and then reporting the facts to the public. It is rare for someone else to pick up the slack when the newspaper disappears.
An entity – a landlord, a manufacturer, a phone company, a credit card company, an internet platform, a self-driving-car manufacturer – is making money off its customers’ activities. Some of those customers are using the entity’s services in ways that are criminal, tortious, or otherwise reprehensible. Should the entity be held responsible, legally or morally, for its role (however unintentional) in facilitating its customers’ activities? This question has famously been at the center of the debates about platform content moderation, but it can come up in other contexts as well.