
Part I - Challenges to Democratic Institutions

Published online by Cambridge University Press

Scott J. Shackelford
Affiliation:
Indiana University, Bloomington
Frédérick Douzet
Affiliation:
Paris 8 University
Christopher Ankersen
Affiliation:
New York University

Information

Type: Chapter
In: Securing Democracies: Defending Against Cyber Attacks and Disinformation in the Digital Age, pp. 17–116
Publisher: Cambridge University Press
Print publication year: 2026
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/


2 Hacking Elections: Contemporary Developments in Historical Perspective

Nowhere in the text of the U.S. Constitution is there an explicit reference to an affirmative right to vote. And yet, the Constitution and its amendments contain numerous provisions relating to the integrity of elections – what counts as a valid or legitimate electoral process. For the Framers of the Constitution, election integrity was fundamentally about ensuring that, if elections were held, only qualified persons could vote. As state legislatures and especially the Congress incrementally ruled out voting discrimination on the basis of whether a person owned land, was Black or another minority, was female, or was at least eighteen years of age, those whose political fortunes were adversely affected by the expanding electorate have exploited features and bugs in the elections architecture of American democracy to uphold their own vision of election integrity – and in the process, disenfranchise eligible voters and influence election outcomes. Their tools have included poll taxes, literacy tests, identification requirements, residency conditions, disenfranchisement for certain criminal convictions, and a range of other measures.

The argument we advance in this contribution is that the twenty-first-century challenge of safeguarding elections from cyber threats must be understood as part of this history, and not solely as a niche engineering or information security problem. Especially since the 2016 presidential election, which brought cybersecurity risks to elections into the mainstream, political discourse about the magnitude of these risks and how best to mitigate them has become wrapped up in ideological conflict about who the threat actors are and what they aim to accomplish. Efforts in 2016 and its aftermath to address cyber risks have therefore unfolded against this backdrop of intensifying ideological conflict over the integrity and legitimacy of democratic procedures.

In information security, a hacker is someone who uses their skills and knowledge of digital systems to solve problems or achieve desired outcomes, even if it means subverting those systems by exploiting a vulnerability to undermine the confidentiality, integrity, or availability of the system or its information. Confidentiality ensures that systems and information are accessible only to authorized individuals or entities; it involves protecting systems and information from unauthorized access or disclosure. Integrity focuses on maintaining the accuracy, completeness, and trustworthiness of a system or information. It ensures that data remain unaltered, complete, and reliable. Availability ensures that systems and information are accessible and usable when needed. It involves protecting against disruptions, outages, or denial of service attacks that may render systems or data inaccessible.
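To make these properties concrete, consider integrity in particular. The short Python sketch below is a simplified, hypothetical illustration (not a description of any deployed election product): a record is accepted only if a keyed digest computed over it matches the digest stored when the record was created, so any alteration is detectable. Confidentiality and availability would be addressed by other controls, such as encryption, access control, and redundancy.

# Hypothetical sketch of an integrity check using a keyed digest (HMAC-SHA256).
# All names and values here are illustrative.
import hashlib
import hmac

def digest(record: bytes, key: bytes) -> str:
    """Return a keyed digest of a record."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def is_unaltered(record: bytes, stored_digest: str, key: bytes) -> bool:
    """Integrity check: True only if the record still matches its stored digest."""
    return hmac.compare_digest(digest(record, key), stored_digest)

key = b"example-signing-key"  # illustrative only
original = b"precinct=12;candidate_a=401;candidate_b=388"
stored = digest(original, key)

tampered = b"precinct=12;candidate_a=501;candidate_b=388"
print(is_unaltered(original, stored, key))   # True
print(is_unaltered(tampered, stored, key))   # False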

Hackers may don a metaphorical white, black, or gray hat, depending on whether their actions and goals are rightful, wrongful, or somewhere in between. Rightful actions consist of compliance with legal and ethical norms concerning hacking and use of information technologies; a rightful motivation is identifying and responsibly showcasing vulnerabilities so that they can be fixed (“patched”). Wrongful actions and motivations consist of illegal or unethical conduct or goals. Gray hat hackers may employ illegal or unethical means to achieve rightful ends, or otherwise straddle a line between good and bad.

We port this concept over to election integrity and its preoccupation with hackers of a different kind: political hackers who use their skills and knowledge of law, psychology, and democratic procedures to subvert those procedures in pursuit of their political interests. Political hackers could be white, black, or gray hatted. Civil rights activists of the 1950s and 1960s like John Lewis are prototypical white hat hackers.

We begin with a review of American electoral history through 2016, with a focus on conceptions of election integrity and perceived threats to integrity. We then review developments that occurred between the 2016 and the 2020 presidential election cycles – an especially sensitive period for both election integrity generally and cybersecurity of elections specifically – with a focus on efforts to enhance the resilience of election infrastructure against cyber threats. As we shall see, the debate about the resilience of election infrastructure against cyber threats is at risk of becoming polarized along familiar themes from the history of American elections, with progressives viewing the principal threats as a combination of voter suppression by Republicans and their ideological allies and conservatives viewing voter fraud – people voting who are not eligible to vote – as the principal threat.

Constitutional Foundations of the Electoral Process

Elections are abundant in American democracy and are held for certain federal, state, and local government offices. The Constitution gives Congress the power to regulate how elections are held for these offices and specifies the qualifications and procedures for elections for the legislative and executive branches. It does not, however, dictate to the states which offices at the state and local levels are to be filled by elected or appointed officials.

The Constitution was negotiated over four months in 1787, ratified by the requisite number of states in 1788, and took effect in 1789. Among the fifty-five delegates that the states had sent to negotiate on their behalf in 1787, there was skepticism about the compatibility of universal suffrage – the idea that all free adults, which at the time usually meant free White men, could voteFootnote 1 – with their vision of a constitutional order that safeguarded the individual rights of propertied White men against populist forces that could be manipulated by malign actors. Most of the Framers, as these delegates came to be known, preferred to eliminate the vulnerability by restricting voting privileges to White property owners. James Madison expressed apprehension about the potential influence of populist appeals on the less affluent classes and argued that limiting voting to the wealthy was essential for preserving republican liberty and election integrity. Similarly, Gouverneur Morris, the author of the preamble to the Constitution, voiced the belief that the ignorant and dependent could not be trusted to act in the public interest, drawing a comparison to children (Klarman, Reference Klarman2016).

Benjamin Franklin was among the dissenters; he argued against constitutional restrictions on the franchise based on wealth (Klarman, Reference Klarman2016). He highlighted the significant contributions of the commoners in the fight for American independence and pointed out that the decisions of the wealthy were susceptible to external influences as well. In the end, Franklin’s view prevailed, with its rhetorical weight backed by the mathematics of ratification: Voting practices in many states had already extended suffrage beyond landowners, and expecting these voters to approve a constitution that would then strip them of their right to vote seemed wholly unrealistic. The Framers therefore left the responsibility of determining voting rights to the individual states. However, they devised a system wherein state legislatures, rather than the voting public, would choose US senators and determine how presidential electors were selected. This was intended to serve as a safeguard against potential disruptions caused by populist elements within the electorate.

Hacking the 15th Amendment

The 15th Amendment, ratified in 1870 as the last of the Reconstruction Amendments,Footnote 2 prohibited states and the federal government from denying or limiting a citizen’s right to vote “on account of race, color, or previous condition of servitude.” The Reconstruction Amendments did not stop states from making it difficult or impossible for Black Americans to vote, however. The states’ hack was to erect a host of barriers to voting that were keyed to vulnerabilities experienced disproportionately by Black Americans, as opposed to blatant race-based bans on voting: poll taxes that exploited the poverty of most Black Americans at the time, literacy tests to exploit the legacy of brutal repression of literacy among enslaved people and the limited educational opportunities of their progeny, and organized violence by non-state actors (such as the Ku Klux Klan) to deter Black participation in elections, among other measures. Especially in the American South, so-called grandfather clauses exempted from such requirements anyone whose grandfather had been entitled to vote, an exemption unavailable to generations of Black Americans whose grandfathers had been enslaved and ineligible to vote.

The Supreme Court ruled in 1915 that grandfather clauses were unconstitutional, finding “the grandfather clauses in the Maryland and Oklahoma constitutions to be repugnant to the Fifteenth Amendment and therefore null and void.”Footnote 3 In 1920, the 19th Amendment extended the right to vote to women; it “prohibits the United States and its states from denying the right to vote to citizens of the United States on the basis of sex.” This followed the establishment in 1913 of direct popular election of senators upon ratification of the 17th Amendment.

The poll tax exploited a vulnerability that was pervasive in the American South through much of the twentieth century and cut across racial lines: poverty. In 1959, 56.2 percent of Black Americans and other people of color lived below the poverty line. The share of all Americans below the poverty line declined to about 13 percent by 1968, but the poverty rate among Blacks and other people of color remained about three times the rate among White Americans (U.S. Department of Commerce, 1969). Eleven states in the American South had poll taxes at one time or another. The 24th Amendment, ratified in 1964, outlawed the practice for federal elections, but five states – Alabama, Arkansas, Mississippi, Texas, and Virginia – retained poll taxes for state elections (Lebetter Jr., Reference Lebetter1995). In Harper v. Va. Board of Elections (1966), the Supreme Court invalidated the practice for states, on the basis that “once the franchise is granted to the electorate, lines may not be drawn which are inconsistent with the Equal Protection Clause of the Fourteenth Amendment.” The Court held that “a State violates the Equal Protection Clause of the Fourteenth Amendment whenever it makes the affluence of the voter or payment of any fee an electoral standard,” because “voter qualifications have no relation to wealth nor to paying or not paying this or any other tax” and thus have no rational basis other than to discriminate against poor people.Footnote 4

The 26th Amendment, ratified in 1971 amid social turmoil in the United States over the Vietnam war and US reliance on a conscript military, lowered the voting age to eighteen, giving Americans eligible for the military draft the right to vote in state and federal elections – and thus have a say in national policy on the deployment of US armed forces (Baum, Cea, & Cohen, Reference Baum, Cea and Cohen2021).

This collection of constitutional requirements and accompanying Supreme Court case law establishes the constitutional floor for voting rights in the United States. The Constitution otherwise defers to state and federal legislators on most ballot decisions, saying that the “times, places and manner” of elections are state matters unless Congress sets nationwide standards. As a result, the selection, implementation, and oversight of elections and polling-related infrastructure (e.g., voting machines) are also left to the states, which, in turn, often leave many aspects of election administration to local (often county) governments.

Electoral Infrastructure Basics

Election infrastructure in the United States rests on this constitutional foundation and is usefully broken down into several discrete components: partisan campaign infrastructure and interest groups; voter registration; ballot casting; and ballot counting and certification of election results. Each of these components is subject to different state and federal laws and carries different cybersecurity and other risk attributes.

Partisan Campaign Infrastructure, Interest Groups, and the Media

Partisan campaign organizations generally determine how their political party nominates candidates to compete in the general election for a given office. Candidates’ campaign organizations provide the management infrastructure and support for their candidates to run for office. Core campaign functions such as fundraising, advertising, and get-out-the-vote initiatives are the responsibility of what might be termed the partisan elements of election infrastructure. These partisan elements are subject to a variety of state and federal laws relating to such matters as fundraising, advertising, coordination and contact with outside interest groups engaged in electioneering activities, and voter registration drives. Interest groups, political action committees (PACs), and other nonpartisan organizations such as think tanks may seek to influence public opinion, candidates’ policy preferences, and election outcomes by endorsing candidates, issuing position papers, producing policy research, advertising, and otherwise engaging in the political process; they too operate in a distinct legal context. Objective, fact-based media is essential to ensuring that voters have access to timely and accurate information about candidates, issues, election processes, and election outcomes. The First Amendment’s heightened protections for political speech are a core element of the constitutional foundation for partisan campaign infrastructure and interest groups.

Voter Registration and Identity

Persons eligible to vote must register to cast a ballot and have their vote counted toward determining the winner of the election. The United States employs a decentralized voter registration system, with each state responsible for maintaining its own registration rolls. State-level agencies, such as election boards or secretaries of state, oversee the voter registration process. To register to vote in the United States, individuals must meet certain eligibility requirements, which typically include being a US citizen, meeting the minimum age requirement of eighteen years old, and being a resident of the state or jurisdiction in which they wish to vote. Some states also require individuals to provide proof of identity or residency during the registration process. States provide various methods for citizens to register to vote, including in-person, online, and mail-in registration.

Individuals can register to vote in person at designated government offices, such as election offices, Department of Motor Vehicle (DMV) offices, or public assistance agencies. Many states also offer online voter registration systems, allowing eligible individuals to register conveniently through secure websites. Some states allow individuals to register to vote by mailing in registration forms obtained from election offices or through voter registration drives. States are also responsible for maintaining accurate voter rolls by regularly reviewing and updating voter registration records. This process includes removing ineligible or deceased voters and updating address changes.
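As a rough illustration of the baseline requirements described above, the following sketch checks citizenship, age, and state residency before a registration application would be accepted. The field names and the flat age rule are hypothetical simplifications; actual state rules (registration deadlines, documentation, felony-related provisions, and more) vary considerably.

# Hypothetical sketch of baseline voter registration checks; real state rules vary.
from dataclasses import dataclass
from datetime import date

@dataclass
class RegistrationApplication:
    name: str
    birth_date: date
    is_us_citizen: bool
    state_of_residence: str

def eligible_to_register(app: RegistrationApplication, state: str,
                         election_day: date, minimum_age: int = 18) -> bool:
    """Return True if the applicant meets the baseline requirements."""
    # Age on election day, accounting for whether the birthday has passed.
    age = (election_day.year - app.birth_date.year
           - ((election_day.month, election_day.day)
              < (app.birth_date.month, app.birth_date.day)))
    return (app.is_us_citizen
            and age >= minimum_age
            and app.state_of_residence == state)

applicant = RegistrationApplication("Jane Voter", date(2006, 5, 1), True, "IN")
print(eligible_to_register(applicant, "IN", date(2024, 11, 5)))  # True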

Voter identity verification and pollbooks are key components of the voting process in the United States and help to ensure the integrity and accuracy of elections. Voter identity verification is the process of confirming the identity of individuals who show up to vote, with the aim of preventing voter fraud and ensuring that only eligible voters cast their ballots. The specific requirements and methods for verifying voter identity vary across states. Voter identification laws, enacted by individual states, determine the types of identification documents that voters must present at the polling place: Some states require a photo ID, such as a driver’s license or a state-issued ID card, while others accept non-photo IDs, utility bills, or other documents showing the voter’s name and address, and still others have no specific identification requirements. Pollbooks are registers or electronic databases that contain voter information, such as names, addresses, and registration status. They serve as a reference for poll workers to verify voter eligibility and ensure that individuals are registered to vote in a particular precinct or district. Pollbooks help prevent individuals from voting more than once and allow election officials to track and update voter participation (U.S. Election Assistance Commission, 2023). Around three-quarters of registered voters live in jurisdictions where pollbooks are used (Verified Voting, 2020).
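A minimal sketch of the pollbook function described above follows, assuming a simple in-memory register (real electronic pollbooks are networked products with far more functionality). The check-in routine confirms that a voter is registered, is in the right precinct, and has not already been recorded as voting.

# Hypothetical sketch of a precinct pollbook: look up registration status
# and record check-ins so a second attempt to vote is flagged.
class Pollbook:
    def __init__(self, registered_voters: dict[str, str]):
        # voter_id -> precinct assignment
        self.registered_voters = registered_voters
        self.checked_in: set[str] = set()

    def check_in(self, voter_id: str, precinct: str) -> str:
        if voter_id not in self.registered_voters:
            return "not registered"
        if self.registered_voters[voter_id] != precinct:
            return "wrong precinct"
        if voter_id in self.checked_in:
            return "already voted"
        self.checked_in.add(voter_id)
        return "issue ballot"

book = Pollbook({"V-1001": "Precinct 7", "V-1002": "Precinct 7"})
print(book.check_in("V-1001", "Precinct 7"))  # issue ballot
print(book.check_in("V-1001", "Precinct 7"))  # already voted
print(book.check_in("V-1002", "Precinct 9"))  # wrong precinct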

Ballot Casting

Voting machines are used in the United States to facilitate the casting and counting of votes in elections. It is up to individual states to decide which machines to purchase; states are also responsible for maintaining these systems and ensuring their readiness for election day. Nine states and the District of Columbia require testing against federal standards (the Voluntary Voting System Guidelines (VVSG), discussed later), sixteen require testing by a federally accredited laboratory, and twelve require full federal certification (Verified Voting, 2021).

For most of the nation’s history, voting machines were mechanical devices that a voter used to mark a ballot. In the past two decades, however, the United States has replaced a substantial percentage of wholly mechanical machines with ones incorporating digital functionalities. These include optical scan machines, direct-recording electronic (DRE) machines, and ballot marking devices (BMDs). Optical scan machines read marked paper ballots, which are manually filled out by voters. The machines scan and tabulate the votes recorded on the paper ballots. DRE machines are electronic devices that allow voters to make their selections directly on a touchscreen or through other input mechanisms. These machines store and tally the votes electronically. BMDs assist voters, including those with disabilities, in marking their ballots electronically. Voters use the device to make their selections, which are then printed on a paper ballot for tabulation.

Vote Counting and Certification of Election Results

Vote counting involves the aggregation and tabulation of individual votes to determine the outcome of an election. The specific methods and technologies used can vary by state and jurisdiction. Common vote counting methods include manual counting of paper ballots, electronic scanning of marked ballots, or the use of electronic voting machines that tally votes electronically. Election laws and procedures govern the entire vote counting process. These laws vary at the federal, state, and local levels, and they cover areas such as canvassing and certification, recounts and audits, and reporting requirements.

Canvassing refers to the official examination and verification of election results, including the validation and counting of provisional or absentee ballots. Once the canvassing process is completed, election officials certify the results, declaring the winners. Election laws often provide procedures for recounting votes in cases where the margin of victory is close or when requested by candidates. Recounts can be conducted manually or through machine recounts. Some states also require postelection audits to verify the accuracy of the voting system and ensure the integrity of the results. Election laws mandate the reporting of election results to the appropriate authorities and the public. These requirements include timelines for reporting, formats for result presentation, and mechanisms for transparency and public access to the reported data (U.S. Election Assistance Commission, 2022). A state official, often a state’s secretary of state, certifies the results of elections.
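To illustrate the kind of postelection audit mentioned above, the sketch below compares a hand count of a randomly sampled set of precincts against the machine tallies and flags any discrepancies for follow-up. It is a simplified, hypothetical example, not any state's statutory audit procedure; real audits specify sample sizes, escalation rules, and chain-of-custody requirements.

# Hypothetical sketch of a simple postelection audit: hand-count a random
# sample of precincts and compare against the machine tallies.
import random

machine_tallies = {          # precinct -> machine count for one contest
    "P-01": {"A": 412, "B": 388},
    "P-02": {"A": 250, "B": 267},
    "P-03": {"A": 301, "B": 299},
}

def hand_count(precinct: str) -> dict[str, int]:
    """Stand-in for a manual tally of the paper ballots in a precinct."""
    return machine_tallies[precinct]  # in this toy example, the counts agree

def audit(sample_size: int = 2) -> list[str]:
    """Return the sampled precincts whose hand count differs from the machine tally."""
    sampled = random.sample(list(machine_tallies), sample_size)
    return [p for p in sampled if hand_count(p) != machine_tallies[p]]

print(audit())  # [] means the sampled precincts matched the machine counts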

News organizations track these counting processes and results so that they can report on developments and election outcomes in a timely manner. For example, the Associated Press maintains a network of stringers – thousands of local reporters with “first-hand knowledge of their territories and trusted relationships with county clerks and other local officials” – and draws on public sources such as state and county websites for its reporting on elections (Associated Press, n.d.).

Vulnerabilities

As we shall see subsequently, hackers have exploited vulnerabilities in each component of election infrastructure for centuries, but the digitalization of this infrastructure adds an additional attack surface for threat actors to attempt to exploit. For example, campaign organizations strive to keep their internal communications confidential, but threat actors might try to breach confidentiality to collect intelligence on campaign strategy and policy preferences – as China did with its cyber intrusions into campaigns of presidential candidates Barack Obama and John McCain in 2008 (Isikoff, Reference Isikoff2013) – or distract and embarrass the campaign by leaking information – as Russia did in 2016 when it hacked the Democratic National Committee and leaked emails (Nakashima & Harris, Reference Nakashima and Harris2018). Voter registration purges can be performed for the legitimate purpose of ensuring that registration rolls are up to date, but the technique can be hacked by unscrupulous election officials to purge eligible voters who are likely to vote for the opposition. Hackers might also target voter registration rolls, as Russia did in 2016, and potentially alter them in ways that threaten their integrity (McFadden, Arkin, & Monahan, Reference McFadden, Arkin and Monahan2018).

These are just a few of the many ways in which hackers can identify and exploit vulnerabilities in election infrastructure. We discuss additional examples later in the chapter.

Breaking down election infrastructure into discrete components has analytic value, but it is also important to consider election infrastructure as a systemic whole, especially when overall public confidence in the integrity of an election is at stake. A threat against any one component of election infrastructure could shake confidence in the integrity of the voting process and the legitimacy of the outcome.

White Hats Hack Back

The Congress, the executive branch, and the federal courts have frequently intervened against efforts by states to curtail voting, especially when those efforts are animated by racism toward Black Americans. The Civil Rights Act of 1870 was Congress’ first attempt at enforcing the 15th Amendment’s ban on denying Black Americans the right to vote;Footnote 5 subsequent amendments in 1957, 1960, and 1964 further expanded federal protections for Black Americans’ voting rights.Footnote 6 Congress enacted the Voting Rights Act of 1965 (VRA) to ban racial discrimination in the administration of elections and enforce the 14th and 15th Amendments; it has amended the VRA five times since then to expand its protections (Waldman et al., Reference Waldman, Weiser, Morales-Doyle and Sweren-Becker2021b).

Legislation enacted in 1984 required polling places to be accessible to people with disabilities, and the 1986 Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) permitted uniformed service members and overseas US voters to register and vote by mail. In 1993, Congress established federal voter registration guidelines in the National Voter Registration Act of 1993, which required each state to have a designated official for overseeing election administration and streamlined the voter registration process (National Conference of State Legislatures, 2022).

Following the controversial presidential election of 2000, where “hanging chads” in Florida required Supreme Court intervention to decide the winner,Footnote 7 Congress enacted the Help America Vote Act (HAVA), which aimed to bring about significant changes in election administration. Congress has not enacted major new voting rights legislation since HAVA.Footnote 8

Meanwhile, the Supreme Court has intervened in ways that tilt constitutional momentum away from protecting voting rights – especially among Black Americans – and toward giving states greater discretion to make changes to their election infrastructure to save money and guard against voter fraud, even when those changes foreseeably erect barriers for eligible voters, especially Black Americans, to cast their ballots.

John Lewis, White Hat Hacker

In June 1964, three voting rights activists were murdered in Mississippi for helping Black Americans register to vote.Footnote 9 The violent episode galvanized support for the Civil Rights Act, which Congress passed, and President Lyndon B. Johnson signed, one month later.Footnote 10 The Civil Rights Act outlawed segregation in public places and in businesses that serve the general public and banned discrimination on the basis of race, color, religion, sex, or national origin in employment. It also required that laws relating to voter qualification be applied equally within a jurisdiction, banned the practice of using errors in registration documents and ballots to disqualify a Black American from voting even when an error was not material to that person’s eligibility to vote, and outlawed literacy tests unless all voters had to take them and certain transparency requirements about them were met.

On March 7, 1965, John Lewis, a prominent civil rights activist and future Member of Congress,Footnote 11 led a march of over 600 people across the Edmund Pettus Bridge in Selma, Alabama to protest racial discrimination in the American South. The peaceful protesters encountered violent resistance at the top of the bridge from Alabama state troopers, whose unprovoked brutality against the procession was captured by television cameras and broadcast nationwide, turning the event in Selma into a pivotal moment in civil rights history. Outrage over the local incident, known as Bloody Sunday, swept the nation and further fired up support for the civil rights movement (Klein, Reference Klein2020). Congress passed the VRA five months later.

The VRA patched numerous vulnerabilities in America’s election infrastructure at the time. Aimed at safeguarding the voting rights of racial minorities, it prohibits discriminatory measures in elections that hinder their ability to cast ballots, such as literacy tests and poll taxes. The VRA also introduced federal oversight of states with a history of voter suppression, requiring them to seek federal approval before changing their voting laws. This “preclearance” requirement aimed to ensure fair and equitable access to the ballot box. The immediate impact of the VRA was substantial, leading to the registration of 250,000 Black Americans by the end of the year and ensuring that by the end of 1966, only four out of the thirteen southern states had less than 50 percent of eligible Black Americans registered to vote (National Archives, 2022).

In 2013, the Supreme Court ruled 5–4 in Shelby County v. Holder that the VRA’s preclearance requirements were unconstitutional because the coverage formula was, in the majority’s view, “based on 40-year-old facts having no logical relationship to the present day.”Footnote 12 The decision made it possible for state governments previously subject to preclearance requirements to move forward with a variety of measures that made it more difficult for eligible voters to cast a ballot, with the effects being disproportionately felt by Black Americans, who voted for Democratic candidates 86 percent of the time in 2022 (Alexander & Fields, Reference Alexander and Fields2022). These measures – which included voter registration purges, polling location closures, limitations on absentee and mail-in ballots, and restrictions on the ability of third-party groups to facilitate registration and ballot casting – were predominantly the product of Republican-led state governments, who justified them as saving money or guarding against voter fraud (Waldman et al., Reference Waldman, Weiser, Morales-Doyle and Sweren-Becker2018).

In the decade since Shelby, the ideological composition of the Court has shifted further rightward. Senate Republicans refused to advance President Obama’s nomination of Merrick Garland to the Court in 2016 to replace the late Justice Antonin Scalia, and so President Trump was able to replace Justice Scalia with another conservative, Neil Gorsuch. President Trump replaced the retiring Justice Anthony Kennedy, a moderate, with the conservative Brett Kavanaugh and the late Justice Ruth Bader Ginsburg, a liberal, with the conservative Amy Coney Barrett. In a 2021 decision from the Court’s six conservative justices, Brnovich v. Democratic National Committee, the Court upheld Arizona’s policy of discarding out-of-precinct ballots and prohibiting third-party groups from returning early ballots for another individual, which had been challenged under the VRA as racially discriminatory due to the disproportionate effect the policies have on persons of color.Footnote 13

Since 2021, lawmakers in at least nineteen states have passed thirty-three laws that make it more difficult for people to vote. In Iowa and Kansas, election officials who typically assist voters with ordinary and essential tasks in the electoral process, such as returning ballots on behalf of voters with disabilities, are now deterred from doing so by the threat of criminal charges. Similarly in Texas, election officials that attempt to regulate poll watcher conduct or encourage voting by mail can now face criminal prosecution. These laws disproportionately affect voters of color. Again in Texas, the recently enacted Senate Bill 1 (SB 1) makes it more difficult for those who face language barriers to get help to cast a ballot. The law additionally bans twenty-four-hour and drive-thru voting (Waldman et al., Reference Waldman, Weiser, Morales-Doyle and Sweren-Becker2021a).

The policy objectives of protecting and expanding access to voting and preventing those who are ineligible to vote from voting are not necessarily in conflict. After all, votes cast illegally dilute the political power of eligible voters. Even so, there are trade-offs between these two objectives, which is where the ideological conflict lies. On the American left, the emphasis has long been on protecting and expanding access, with an implicit acceptance that this could in theory result in a higher incidence of voter fraud. The principal adversaries are partisan (Republican) legislators and elected officials. Voter fraud is a federal crime and an exceedingly rare one at that: In a 2007 report, the Brennan Center for Justice found incident rates of fraud between 0.0003 percent and 0.0025 percent (Levitt, Reference Levitt2007).

On the American right, the emphasis is on preventing fraud, even if it means that otherwise eligible voters encounter more barriers to casting their ballot. The principal adversaries are fraudsters and their partisan (Democratic) enablers. To the extent that these additional barriers are encountered primarily by the right’s political opponents, it could be a feature rather than a bug of their efforts to address fraud. Following the Supreme Court’s decision in Brnovich, Texas Governor Greg Abbott, a member of the Republican Party, approved the implementation of SB 1, a law aimed at imposing restrictions on voting methods and schedules (Wilder & Hira, Reference Wilder and Hira2021). The Texas Tribune reported that the legislation targets voting initiatives employed in Harris County, a populous and diverse county with a Democratic lean (Ura, Reference Ura2021). The law prohibits overnight early voting hours and drive-thru voting, which were both well-received by voters of color in the previous year’s elections.

Hanging Chads and the Help America Vote Act (HAVA)

On November 26, 2000, the State of Florida officially declared George W. Bush as the winner of its twenty-five electoral votes in the race for the U.S. Presidency. The declaration came three weeks after a tumultuous election night on November 7, with reports of significant problems plaguing Florida’s administration of the election. The Florida Division of Elections reported that Governor Bush had beaten Vice President Gore by 1,784 votes, which was less than one-half of a percent of the votes cast. This triggered an automatic machine recount under Florida elections law. The machine recount showed Governor Bush still winning, but by a substantially diminished margin of just 537 votes.Footnote 14 Vice President Gore sought manual recounts in several key counties, but this would take more time; the State of Florida and Governor Bush opposed the manual recount on the grounds that there was no basis in law for extending the deadline for local county canvassing boards to submit their results to the Florida secretary of state – the official responsible for election administration in Florida. The dispute was eventually decided in a 5–4 decision by the Supreme Court in Bush v. Gore, which ordered a stop to the manual recount underway in Florida and effectively made Governor Bush the winner of Florida’s electoral votes and, as a result, the winner of the presidential election (for a timeline of this episode, see Stanford Law School, 2021). As the Supreme Court observed in Bush v. Gore, “This case has shown that punchcard balloting machines can produce an unfortunate number of ballots which are not punched in a clean, complete way by the voter,”Footnote 15 resulting in ballots with paper fragments still attached – so-called hanging chads. County officials grappled with deciphering voter intent amid hanging chads and the controversial “butterfly ballot” design in Palm Beach County, which caused confusion and inadvertently affected thousands of votes.

The U.S. Commission on Civil Rights (USCCR) investigated the administration of Florida’s election and issued a report in June 2001 that documented widespread problems. In the years before the election, Florida had purged from its voter registration rolls tens of thousands of individuals suspected to be ineligible to vote, but the purge snared thousands of eligible voters as well. The USCCR found that Black voters were placed on purge lists “more often and more erroneously than Hispanic or White voters” (U.S. Commission on Civil Rights, 2002). A person on the so-called exclusion list had to affirmatively prove their eligibility to be relisted in voter registration rolls – a nontrivial burden. The USCCR also found that Black voters’ ballots were ten times more likely to be rejected on technical grounds, such that of the 180,000 spoiled ballots cast in Florida during the November 2000 election, over half were cast by Black Americans, even though Black Americans made up only around 11 percent of Florida voters (U.S. Commission on Civil Rights, 2002).

Elections Go Digital and a New Threat to Election Integrity Emerges

In 2002, the Federal Election Commission (FEC) approved the Voting System Standards (VSS) on a partisan 3–2 vote, with the Republican commissioners critiquing the VSS as too burdensome on state and local governments and downplaying the problems that the 2000 presidential election had exposed (Federal Election Commission, 2002). The partisan split at the FEC helped catalyze Congress to act (Weiner, Reference Weiner2019), and its main answer to the problems that the 2000 election exposed in US election administration was the bipartisan HAVA of 2002 (U.S. Election Assistance Commission, 2023). HAVA is the most ambitious effort yet to modernize America’s election infrastructure, and has three principal aims: “establish a program to provide funds to States to replace punch card voting systems,” which produced the infamous chads; “establish the Election Assistance Commission to assist in the administration of Federal elections” and provide related support; and “establish minimum election administration standards for States and units of local government with responsibility for the administration of Federal elections.”

Going chadless meant embracing digital systems, using them to replace punch card voting systems. By January 2004, states were also obligated to establish computerized statewide voter registration lists to streamline and centralize the registration of voters. These lists were to be integrated with other agency records to verify the accuracy of information provided on voter registration applications. HAVA shifted the responsibility of defining, maintaining, and administering the voter registration lists from local officials to the states themselves. The legislation also mandated regular maintenance of the statewide lists to ensure ineligible voters and duplicate names were removed.
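The list-maintenance duties that HAVA assigned to the states can be pictured with a short sketch. The record fields and the duplicate-matching rule below are hypothetical simplifications; real statewide systems match registrations against motor vehicle, Social Security, and vital-records data under detailed legal safeguards.

# Hypothetical sketch of HAVA-style statewide list maintenance:
# de-duplicate registrations and drop records flagged as ineligible.
from dataclasses import dataclass

@dataclass(frozen=True)
class VoterRecord:
    voter_id: str
    name: str
    date_of_birth: str
    county: str
    deceased: bool = False
    ineligible: bool = False

def maintain(records: list[VoterRecord]) -> list[VoterRecord]:
    cleaned: dict[tuple[str, str], VoterRecord] = {}
    for rec in records:
        if rec.deceased or rec.ineligible:
            continue                                 # remove flagged records
        key = (rec.name.lower(), rec.date_of_birth)  # naive duplicate key
        cleaned.setdefault(key, rec)                 # keep first occurrence
    return list(cleaned.values())

rolls = [
    VoterRecord("1", "Ada Smith", "1970-03-02", "Marion"),
    VoterRecord("2", "Ada Smith", "1970-03-02", "Lake"),       # duplicate
    VoterRecord("3", "Ben Jones", "1950-07-14", "Allen", deceased=True),
]
print([r.voter_id for r in maintain(rolls)])  # ['1']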

Cybersecurity and the Help America Vote Act (HAVA)

HAVA sought to facilitate the digitalization of election infrastructure by creating a new federal agency, the Election Assistance Commission (EAC), comprising four commissioners subject to Senate confirmation and a specialized staff, and by creating new election-related responsibilities for the National Institute of Standards and Technology (NIST), an existing federal agency with deep experience developing digital governance standards, guidelines, and best practices. Among other responsibilities, the EAC would test and certify voting equipment as compliant with standards on the security, reliability, and accessibility of voting systems that the legislation tasked NIST with developing as chair of a Technical Guidelines Development Committee (TGDC).

HAVA set a deadline of nine months from the appointment of the four EAC commissioners for the TGDC to provide its first set of recommendations on voting system security, reliability, and accessibility. To meet this deadline, the TGDC formed three subcommittees focused on core requirements and testing, human factors and privacy, and security and transparency, hosting a series of workshops, meetings, and teleconferences in accordance with NIST’s established practice of developing standards, guidelines, and best practices through open and transparent processes. The effort produced the VVSG 2005, which the TGDC submitted to the EAC in May 2005, meeting the established timeline.

The VVSG 2005 focused on usability, accessibility, and security of election systems. The goal of the usability-related guidelines was to ensure that the system was user-friendly and intuitive, allowing voters to interact with ease. Additionally, the guidelines emphasized the need for the system to incorporate error alert mechanisms, such as notifying voters about overvoting to reduce the number of improper ballots. The accessibility guidelines detailed requirements aimed at ensuring that individuals with limited vision and other disabilities, or non-English-speaking voters, would have equal access to the voting process – including accommodations to protect their privacy.

The security section of the VVSG 2005 provided explicit guidelines regarding the distribution and validation of voting system software. These requirements were put in place to guarantee that states and localities receive the verified version of the voting system software that has undergone testing and certification. Ensuring correct software deployment safeguards the integrity of the voting process, because the software used in the voting systems aligns with the approved and validated version.

Furthermore, the security section included provisions for validating the setup of the voting systems. This involved inspecting the voting system software after it had been loaded onto the voting systems to verify that it corresponded to the tested and certified software. This additional validation step enhanced the security and reliability of the voting systems, reinforcing confidence in the accuracy and integrity of election results. By emphasizing these security measures, the VVSG 2005 aimed to mitigate potential risks and protect the integrity of the voting process (U.S. Election Assistance Commission, 2005).
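The distribution and setup-validation requirements described above amount to checking that the software actually loaded on a machine is byte-for-byte identical to the build that was tested and certified. The sketch below illustrates that idea with a file digest comparison; the paths and digest value are hypothetical, and the actual VVSG procedures involve trusted build environments and accredited test laboratories.

# Hypothetical sketch of setup validation: compare the digest of the
# software installed on a voting system against the certified reference.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_installation(installed: Path, certified_digest: str) -> bool:
    """True only if the installed software matches the certified build."""
    return sha256_of(installed) == certified_digest

# Usage (illustrative path and digest only):
# ok = validate_installation(Path("/opt/voting/firmware.bin"),
#                            certified_digest="e3b0c44298fc1c149afbf4c8996fb924"
#                                             "27ae41e4649b934ca495991b7852b855")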

To help states meet HAVA’s requirements, the law authorized $3.9 billion in funding for a grants program that the EAC would administer. Congress appropriated $2.8 billion for the first phase of the grants program to help states and localities purchase new voting equipment, upgrade their voter registration systems, and make other improvements to their election infrastructure. In 2006, Congress appropriated $1.1 billion, which was used to continue the work of phase one and fund new initiatives such as the Help America Vote College Program (HAVCP). In total, the EAC has awarded $3 billion in grants to state and local governments.

Vulnerabilities in Digital Systems Identified

The passage of HAVA in 2002 and its push for digitizing voting systems led to greater interest among security researchers in the security of voting systems. For example, Bannet et al. (Reference Bannet, Price, Rudys and Singer2004) demonstrated the relative ease of injecting malicious code into DRE systems and the difficulty of detecting it, and Kohno et al. (Reference Kohno, Stubblefield, Rubin and Wallach2004) demonstrated major security problems in a Diebold DRE system by studying source code that an independent researcher, Bev Harris, found online.Footnote 16 In 2006, it was expected that electronic voting would be utilized by approximately 80 percent of American voters in the upcoming midterm elections, where control of the House of Representatives was on the line (Tapper & Venkataraman, Reference Tapper and Venkataraman2006). In the run-up to the election, researchers identified new problems with the security and reliability of electronic voting systems. Princeton Professor Edward Felten and graduate students Ariel Feldman and Alex Halderman acquired a Diebold AccuVote-TS machine from an undisclosed source and successfully identified methods to rapidly upload malicious programs onto the machine (Feldman, Halderman, & Felten, Reference Feldman, Halderman and Felten2006). They discovered that malicious programs could be installed by gaining access to the machine’s memory card slot and power button, which were located behind a locked door on the side of the machine. The lock could be easily picked in a mere ten seconds, enabling unauthorized access. Once inside, the installation of the malicious software took less than a minute to complete.

The researchers developed software capable of modifying all records, audit logs, and counters stored by the voting machine. They ensured that even a thorough forensic examination would not detect any tampering. These programs had the ability to change vote totals or cause machine malfunctions, which could potentially impact the outcome of an election, especially if the compromised machines were located in critical polling stations. Furthermore, the researchers discovered the possibility of spreading malicious programs to multiple machines through a computer virus. This could be achieved by piggybacking on a new software download or an election information file being transferred between machines.

These studies are part of a line of research documenting security shortcomings in election infrastructure, with a primary focus on voting systems – the machines used to record and potentially document and count votes – and how an attacker could subvert those systems to affect election outcomes or undermine the public’s perception of the integrity of an election. For example, a study from the Brennan Center for Justice on the security of voting systems documented numerous attack scenarios on voting systems that resulted in changed votes (Lawrence, Reference Lawrence2006). The State of Ohio’s EVEREST study examined the usability, stability, and security of voting systems and found that every voting system used in Ohio had “critical security failures that render their technical controls insufficient to guarantee a trustworthy election” (McDaniel et al., Reference McDaniel2007, p. 3). Its threat model focused on how a threat actor could exploit cyber vulnerabilities to influence election outcomes by producing incorrect vote counts or blocking eligible voters, or undermine integrity by delaying the results or violating the secrecy of the ballot (McDaniel et al., Reference McDaniel2007, pp. 14–15). California’s “Top-to-Bottom Review” in 2007 of voting systems used in California yielded similar results (Bowen, Reference Bowen2007) and focused on how an attacker might change election outcomes (Bishop, Reference Bishopn.d., p. 1);Footnote 17 it also “made no assumptions about constraints on the attackers” and did not consider the likelihood of any of its attacks (Bishop, Reference Bishopn.d., p. 2). Florida and Connecticut pursued studies of their own, with a comparable threat model and similar results about the poor cybersecurity of voting systems. In 2015, Norden and Famighetti used interviews and data from the Verified Voting Foundation to document a range of vulnerabilities across election infrastructure (Norden & Famighetti, Reference Norden and Famighetti2014). They too emphasized the risks of altered election outcomes and soured confidence, but in ranking the severity of threats, they also incorporated such factors as feasibility and cost. Finally, in the run-up to the 2016 elections, several other researchers and research groups issued recommendations about shoring up election systems considering the growing cybersecurity threat (National Institutes of Science and Technology, 2016).

A common thread running through many of these studies is a narrow focus on how an adversary might exploit cyber vulnerabilities to affect election outcomes and undermine confidence, in isolation from other means to achieve the same outcomes. As we have seen, however, hackers have exploited vulnerabilities in electoral infrastructure for centuries to affect election outcomes and challenge confidence in the integrity of elections: Hostile, black hat measures such as poll taxes and grandfather clauses were intended to affect election outcomes in favor of a dominant White majority, while white hat actions by civil rights leaders such as John Lewis exposed legitimacy shortcomings in electoral infrastructure that led to legislative reforms. The researchers behind the studies invested time and energy to acquire voting systems or their source code, identify vulnerabilities, and develop techniques to exploit them. A malicious hacker would have to as well, and if their goal is to subvert democracy, they might reasonably conclude that there are proven and potentially more cost-effective ways to do so than carrying out cyberattacks against voting machines: Why go to the trouble, risk, and expense of hacking a voting machine when partisan gerrymandering or selective shuttering of polling places could yield a comparable result?

This reasoning could explain why the initiative that arguably most reflected the cyber policy zeitgeist in Washington, DC for much of the period between the enactment of HAVA and the 2016 presidential election did not mention voting system security at all. The Center for Strategic and International Studies convened a commission of experts chaired by the bipartisan pair of Representatives Jim Langevin (D-RI) and Mike McCaul (R-TX) in advance of the 2008 presidential elections to provide analysis and recommendations to the new presidential administration. The commission’s report emphasized “the militaries and intelligence services of other nations” as the main cyber adversaries, and highlighted American economic competitiveness and US critical infrastructure – “electricity, communications, and financial services” – as being most at risk from cyberattacks (Center for Strategic and International Studies, 2008). At the time, however, the US government did not consider election infrastructure to be part of critical infrastructure – that infrastructure’s designation as critical by the Department of Homeland Security (DHS) would have to wait until shortly after the 2016 election (U.S. Election Assistance Commission, 2022).

The 2016 Presidential Election and Its Aftermath

Russia hacked the 2016 election at Vladimir Putin’s direction to damage Hillary Clinton and undermine Americans’ confidence in the legitimacy of the electoral process. How much the Russians contributed to Clinton’s loss to Donald Trump and to the precipitous drop in Americans’ trust in electoral infrastructure in the aftermath of the campaign is a matter of ongoing debate; the fact of those outcomes, however, is not.

Understandably, the events of 2016 provoked unprecedented interest in the resiliency of electoral infrastructure against digital risks – not only cyberattacks on voting systems or other infrastructure but also the use of social media platforms to spread falsehoods and lies. They also injected partisan discord into what had previously been a largely technocratic and, to some election security researchers, an often quixotic quest to shore up election infrastructure against digital hackers – the salience of the research would resonate in even-numbered years in the run-up to an election, and then taper off once the election passed. As we have seen, the impetus for HAVA and its drive toward digitizing electoral infrastructure stemmed from the 2000 presidential election, where a contest for the president was lost due to voting system irregularities amid bitter partisan debate over the cause and consequences of those irregularities. HAVA was supposed to fix those irregularities, but its push for digitizing elections, warned security researchers, introduced new risks: voting systems that were vulnerable to hacking. The implied threat actor in this research on the machines that record and tabulate votes was not a foreign government, but a domestic political partisan trying to boost their preferred candidate. The cyber policy community, meanwhile, focused on foreign military and intelligence services as the principal threat actors, and on critical infrastructure – which did not include electoral infrastructure at the time – and American economic competitiveness as their main targets. Therefore, 2016 caught both the technocratic cyber research and cyber policy communities off guard: As it happened, it was a foreign government that sought to subvert electoral infrastructure through digital means. And the infrastructure it targeted was not voting systems, but other components of electoral infrastructure – notably, voter registration databases and partisan campaign infrastructure.

What Happened in 2016

The story of the 2016 election and its brush with cyberattacks is documented in the Senate Select Committee on Intelligence’s bipartisan opus, “Russian Active Measures Campaigns and Interference in the 2016 U.S. Election” (Senate Select Committee on Intelligence, 2020). The condensed depiction presented here is intended to further contextualize efforts to shore up electoral infrastructure in the aftermath of 2016.

The digital elements of the Russian campaign involved cyberattacks aimed at compromising the confidentiality, integrity, or availability of targeted systems, and information operations aimed at denigrating Hillary Clinton, exacerbating political polarization, and undermining the public’s confidence in the integrity of democratic procedures. In June 2016, a hacker posted online a trove of nonpublic documents from the Democratic National Committee (DNC). One month later, a hacker calling themselves Guccifer 2.0 claimed to have shared 20,000 nonpublic DNC emails with WikiLeaks, the notorious publisher of stolen information (Thielman, Reference Thielman2016). Cybersecurity researchers and the US government have concluded that the Russian government was behind these operations, though Moscow denies the allegations. The emails, consisting of 19,252 messages and 8,034 attachments, were leaked on October 7 by WikiLeaks and a pop-up sister site called DCLeaks (Krawchenko et al., Reference Krawchenko, Judd, Cordes, Goldman, Flores and Shabad2016). The leaked correspondence revealed a bias within the DNC toward Hillary Rodham Clinton over her primary rival, Bernie Sanders, despite the DNC’s earlier claims of neutrality, and contained other embarrassing vignettes about the inner workings of the DNC. The timing of the leaks was noteworthy, coming within hours of the airing of video footage in which Clinton’s rival Donald J. Trump boasted about sexually assaulting women. Later that day, a joint statement from DHS Secretary Jeh Johnson and Director of National Intelligence (DNI) James Clapper warned that the US government was “confident that the Russian Government directed the recent compromises of e-mails from U.S. persons and institutions, including from U.S. political organizations” (Department of Homeland Security & Director of National Intelligence, 2016).

Russian government hackers also systematically scanned election-related state infrastructure in what was likely all fifty states and breached the systems of at least two of them.Footnote 18 The hackers were able to access voter registration data, and while there is no evidence that the hackers altered data, their access would have allowed them to do so. The hackers also researched voting systems and other aspects of electoral infrastructure, though there is no evidence that any such systems were compromised in 2016.

In August 2016, DHS Secretary Johnson hosted a conference call with state election officials “to discuss the cybersecurity of the election infrastructure” (Department of Homeland Security, 2016). He told the officials that “DHS is not aware of any specific or credible cybersecurity threats relating to the upcoming general election systems,” but offered DHS assistance in helping state officials manage risks to voting systems in each state’s jurisdiction. He also mentioned the possibility of designating electoral infrastructure as critical infrastructure, which would enable DHS to prioritize the sector for support and establish protected channels for information exchange.

The reaction to such a designation from officials who spoke out during the meeting, Secretary Johnson said later, “ranged from neutral to negative,” with those expressing negative views asserting that administering elections was their responsibility, not the federal government’s (Department of Homeland Security, 2016; Johnson, Reference Johnson2017). Later in August, the FBI issued an alert about malicious activity targeting election-related state infrastructure, and in September, Congressional leaders issued a bipartisan plea to state officials to accept DHS support. Forty-nine states requested technical assistance, which DHS delivered primarily in the form of remote scans of internet-facing election-related state infrastructure (Government Accountability Office, 2020).

In December 2016, Brian Kemp, then-secretary of state for the State of Georgia, sent letters to the DHS and President-elect Trump alleging that the DHS had scanned Georgia’s voter registration systems as a precursor to attacking them and asking Trump to investigate the matter upon taking office (Fulghum, Reference Fulghum2017). Secretary Johnson’s response days later explained that there was no scan, and that the traffic described in Kemp’s letter was from a DHS contractor at the Federal Law Enforcement Training Center (FLETC) conducting research on “whether incoming FLETC contractors and new employees had a certain type of professional license – a service that, as I understand it, your website provides to the general public” (Fulghum, Reference Fulghum2017). He further explained that “the technical information we have corroborated [this explanation], and indicates normal Microsoft Internet Explorer interaction by the contractor’s computer with your website” (Fulghum, Reference Fulghum2017). Congressman Jason Chaffetz, Chairman of the House Committee on Oversight and Government Reform, followed with a letter to DHS complaining that Secretary Johnson’s written response and subsequent briefings from DHS staff “did not provide adequate information to verify or validate” this explanation; he requested that relevant records be preserved and that the DHS inspector general undertake an investigation. In June 2017, the inspector general notified Congress that Kemp’s allegations were “unsubstantiated” and that “the activity Georgia noted on its computer networks was the result of normal and automatic computer message exchanges generated by the Microsoft applications involved” (Department of Homeland Security, 2017).

The bitterness of the 2016 election set the stage for post-2016 efforts to address cyber threats to electoral infrastructure: The incumbent administration warned that Russia had interfered in the election in support of the opposition’s candidate, Donald J. Trump; the opposition distrusted the incumbent administration’s motives; and Trump himself claimed throughout the campaign, without basis, that the election was “rigged” against him.

Measures Taken between 2017 and 2020

The period between 2017 and 2020 – with the Congressional midterm elections in 2018 and the 2020 presidential contest – was an especially sensitive moment for American democracy, particularly for debates around electoral integrity and the resilience of election-related infrastructure. The 2016 election highlighted that cynical partisans were not the only potential threat to electoral integrity: Foreign governments also had reason to intervene. That the foreign government’s preferred candidate, Donald J. Trump, won the presidential contest raised the uncomfortable question of the extent to which Russia’s efforts contributed to his victory. His own linkages to Russia – including his public invitation in the summer of 2016 for Russia to find and presumably release deleted emails from Hillary Clinton’s private email server – and the Department of Justice (DOJ) inquiry into those linkages further sharpened the ideological stakes of whether and how to make election infrastructure more resilient against digital threats. In particular, the Trump administration, facing a series of threats to its legitimacy built in part on the vulnerability of electoral infrastructure to digital threats, could have denied or played down the existence of that vulnerability in policy. The fact that it did not is fairly remarkable.

Ten weeks would elapse between election day on November 8, 2016 and inauguration on January 20, 2017. On December 28, 2016, President Obama signed Executive Order (EO) 13757, “Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities,” to authorize economic sanctions against actors involved in “tampering with, altering, or causing a misappropriation of information with the purpose or effect of interfering with or undermining election processes or institutions,” and the administration sanctioned Russian organizations and individuals in connection with the 2016 election interference. The administration also expelled Russian intelligence officers operating under diplomatic cover from the United States and forced the closure of two Russian diplomatic facilities. In January 2017, Secretary Johnson designated electoral infrastructure as critical infrastructure. In addition to enabling the DHS to forge a deeper partnership with the sector on security issues, the designation signaled to international partners and adversaries the sensitivity of electoral infrastructure from a national security perspective. For example, the United States had been engaged in diplomatic deliberations over peacetime norms of behavior in cyberspace and was advocating, inter alia, for a norm that states not interfere with one another’s critical infrastructure.

When Trump took the oath of office on January 20, 2017, he inherited a policy trajectory from the Obama administration of trying to hold Russia accountable for its actions while deterring Russia or any other country from running a similar play in future elections. He rejected the assessment that Russia had intervened in the election to support him, however, and instead focused his attention on alleged voter fraud. The threat actor he had in mind was not a foreign government such as Russia or even a cyberthreat actor, but a person who is ineligible to vote yet nevertheless casts one.Footnote 19 Such instances are exceedingly rare – in the 2020 election, for example, the Associated Press identified “fewer than 475” potential cases in the battleground states, a number that would have made no difference in the presidential outcome (Cassidy, Reference Cassidy2021). In May 2017, however, he signed an EO to launch a “Presidential Advisory Commission on Election Integrity” charged with identifying and rooting out fraudulent or improper voting and voter registration (Trump White House Archives, 2017). Not surprisingly, the commission had uncovered no evidence of widespread voter fraud when it disbanded in January 2018.

Inside the bureaucracy of the Trump administration, however, agencies such as the DHS and the FBI were positioning themselves as allies of state election officials in those officials’ efforts to prepare for the 2018 midterm elections and later the 2020 presidential elections. The DHS and the FBI hosted briefings with state election officials on threats to election integrity and how to build resilience against them (Department of Homeland Security, 2018), and in March 2018, the DHS announced the launch of the Elections Infrastructure Information Sharing and Analysis Center (EI-ISAC) to coordinate the sharing and exchange of cyber threat information among election officials and organizations (Center for Internet Security, n.d.). In addition, the designation of electoral infrastructure as critical infrastructure enabled DHS to convene a Government Coordinating Council (GCC) and a Sector Coordinating Council (SCC) to facilitate information sharing among public and nongovernmental actors engaged in election administration. In September 2018, President Trump signed EO 13848, “Imposing Certain Sanctions in the Event of Foreign Interference in a United States Election” (Trump, Reference Trump2018). The EO directs the DNI to prepare a report within forty-five days of an election on whether foreign governments interfered with it; the EO also directs that DOJ and DHS prepare a report within forty-five days of the DNI’s report on whether the foreign interference affected the election outcomes. The order also authorized sanctions and other measures against actors found to have interfered with a US election.

Congress, meanwhile, had debated multiple major legislative proposalsFootnote 20 and three passed both houses: The Consolidated Appropriations Acts of 2018Footnote 21 (Royce, Reference Royce2018) and 2020 (U.S. Government Publishing Office, 2019) appropriated $380 million and $425 million, respectively, to the EAC to give to states to improve election technology and security, and the Coronavirus Aid, Relief, and Economic Security Act (CARES Act) included $400 million in HAVA emergency funds for the 2020 federal election cycle. The 2018 funding came too late to contribute much to that year’s Congressional midterms, but by September 2020, states had spent nearly 94 percent of it, with thirty-five states spending their allocation completely and another thirteen spending above 90 percent of theirs (U.S. Election Assistance Commission, 2021). The CARES Act funding could only be used in direct connection with helping states manage elections in the context of a pandemic, though some states used it to purchase new enterprise IT in support of remote work, which could have some modest cybersecurity benefit, as newer IT tends to be easier to secure than older IT (U.S. Election Assistance Commission, n.d.).

In October 2018, the heads of the DHS, the DOJ, the FBI, and the Office of the DNI jointly warned of “ongoing campaigns by Russia, China and other foreign actors, including Iran,” to influence elections as well as attempted intrusions into state election-related infrastructure, though in the latter instance, the heads explained, “[i]ncreased intelligence and information sharing among federal, state and local partners has improved our awareness of ongoing and persistent threats to election infrastructure” and there was no evidence of a successful breach (Office of the Director of National Intelligence, n.d.).

A joint statement on November 5, 2018 from the same group took on a reassuring tone and described the government’s efforts to guard against digital threats to election infrastructure as “unprecedented”:

Our agencies have been making preparations for nearly two years in advance of these elections and are closely engaged with officials on the ground to help them ensure the voting process is secure. Americans can rest assured that we will continue to stay focused on this mission long after polls have closed.

(Office of the Director of National Intelligence, 2018)

The agency heads warned voters to be vigilant, especially in the face of foreign influence campaigns designed to shape “public sentiment and voter perceptions … by spreading false information about political processes and candidates, lying about their own interference activities, disseminating propaganda on social media, and through other tactics.” Voters should seek ground truth about elections and election processes by contacting their local election organizations and be cautious consumers of information.

After the election, news reports surfaced that U.S. Cyber Command had carried out cyberattacks against the Russia-based Internet Research Agency (IRA) to shut it down during the election. The IRA was a notorious “troll farm” engaged in propaganda operations, often using social media. The IRA, its leader Yevgeniy Prigozhin, and others connected with it had previously been indicted by the DOJ on criminal charges stemming from the IRA’s interference in the 2016 election,Footnote 22 and in September 2018 the DOJ indicted a Russian accused of overseeing Project Lakhta, the Russian codename for a propaganda operation targeting the United States and other countries, carried out in part through the IRA. The goal of the attack, according to this reporting, was to keep the IRA out of commission from election day until election results were certified (Barnes, Reference Barnes2019). The attack was reportedly part of a broader campaign by US military and intelligence organizations to disrupt the ability of foreign actors, especially Russia, to interfere in the midterm elections (Barnes, Reference Barnes2018).

The DNI submitted the Intelligence Community (IC)’s required report under EO 13848 on foreign interference in the 2018 midterm elections to the President on December 21, 2018. DNI Coats’ public statement about the classified report said that “the Intelligence Community does not have intelligence reporting that indicates any compromise of our nation’s election infrastructure that would have prevented voting, changed vote counts, or disrupted the ability to tally votes.” In their public comments about the required follow-on report, which was also classified, the Attorney General and DHS Secretary said there was “no evidence to date that any identified activities of a foreign government or foreign agent had a material impact on the integrity or security of election infrastructure or political/campaign infrastructure used in the 2018 midterm elections” (U.S. Department of Justice, 2019).

These efforts to enhance the resilience of electoral infrastructure and disrupt adversaries’ operations continued into the 2020 election cycle, with similar results, at least as far as foreign interference goes. The DNI submitted a classified report to the president in January 2021 pursuant to EO 13848; in March, the Biden administration released a declassified version. According to that report, the IC found “no indications that any foreign actor attempted to alter any technical aspect of the voting process in the 2020 US elections, including voter registration, casting ballots, vote tabulation, or reporting results.” It also “identified some successful compromises of state and local government networks prior to Election Day – as well as a higher volume of unsuccessful attempts – that we assess were not directed at altering election processes” (Director of National Intelligence, 2021). The follow-on report from the DOJ and DHS echoed the DNI, reporting that there was “no evidence that any foreign government-affiliated actor prevented voting, changed votes, or disrupted the ability to tally votes or to transmit election results in a timely manner; altered any technical aspect of the voting process; or otherwise compromised the integrity of voter registration information or any ballots cast during 2020 federal elections” (U.S. Department of Justice, 2021, p. 2). Both reports, however, highlighted the growing range of foreign actors engaged in digitally enabled influence operations against the United States.

President Trump lost his reelection bid to former Vice President Joseph Biden in 2020. He claimed without evidence, however, that the election had been stolen from him through widespread voter fraud, denigrated the results as illegitimate, and sought ways to stay in power despite the election’s outcome. In contrast, nine days after the election the DHS issued a joint statement from members of the elections infrastructure GCC and SCC describing the election as the “most secure in American history” and stating that “[t]here is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised” (Cybersecurity & Infrastructure Security Agency, 2020).

Conclusion

Though the cybersecurity of elections is a recent concern, hacking elections is not. The relatively limited constitutional discipline on elections and the fact that the Constitution distributes responsibility for administering elections to state and local governments mean that the attack surface of America’s election infrastructure for would-be hackers is large. Technology has added a new dimension to this attack surface in the form of digital technologies that facilitate tasks ranging from voter registration to vote casting and vote counting. Digital technologies are prevalent throughout electoral infrastructure – and technologists were quick to identify risks to voting systems in the years following enactment of HAVA in 2002 – but the mainstream cybersecurity policy community did not consider these risks a priority on par with the risks to critical infrastructure (which at the time did not include state election-related infrastructure) and economic competitiveness posed by foreign military and intelligence services.

The digitalization of election infrastructure has lowered one of the barriers to entry for threat actors seeking to disrupt democratic processes: Previously, a hacker usually needed partisan allies to carry out an attack. Gerrymandering, poll taxes, even organized violence at polling places all require partisan allies. Hacks like these are, in a sense, inside jobs that rely on willing partners and a complacent public. Digitalization makes it possible for outsiders to hack elections: A lone wolf, a criminal group, or a foreign government can potentially interfere with elections on its own.

The 2016 presidential election pulled election security into the mainstream of cyber policy, with the DHS, the FBI, and other federal agencies investing significant effort in building partnerships with state election officials and taking direct action against suspected threat actors through criminal indictments and economic sanctions. The partnership-building efforts with state officials after the January 2017 designation of election systems as critical infrastructure created new information sharing channels and trust relationships that facilitated the federal government’s ability to provide security and other assistance. That these efforts thrived during the Trump administration despite President Trump’s open hostility toward the notion that Russia interfered in the 2016 election to support his candidacy speaks to the ability of the agencies and their partners in state and local government to pursue their work as a technocratic rather than a political or partisan initiative. President Trump’s benign neglect of his administration’s efforts to protect election infrastructure from digital threats ended when he fired Chris Krebs, head of the Cybersecurity and Infrastructure Security Agency (CISA), which leads federal government cybersecurity initiatives for critical infrastructure (including election infrastructure), for claiming that the 2020 election had been the most secure in the history of the country. Trump did not quibble with claims that the election had been secured against foreign interference; his quarrel with Krebs and with others who touted the election’s cybersecurity was that such claims conflicted with his assertion that the election had been stolen by the Democrats.

The hack of the DNC and breach of election-related infrastructure by Russian government-directed actors in 2016 elevated concerns regarding the cybersecurity of election infrastructure. In the immediate aftermath of the foreign interference in the 2016 election, then President Obama signed an EO authorizing economic sanctions against actors seeking to interfere with or undermine electoral processes or institutions. Shortly thereafter the DHS designated electoral infrastructure as critical infrastructure, notably signaling to allies and adversaries alike the acute sensitivity and importance of election infrastructure to national security.

Despite partisan differences, noteworthy strides have been made to counter continued attempts at undermining US elections and election infrastructure. The passage of major legislative proposals such as the Consolidated Appropriations Acts of 2018 and 2020, as well as the CARES Act, has directed funds toward further improving election technology and security.

Following the 2016 election, there was a noticeable shift toward paper ballots to increase election security. In the 2020 election, an estimated 93 percent of all votes cast had a paper record, up from 82 percent in 2016. This uptick resulted from states and local jurisdictions replacing outdated and vulnerable paperless voting machines (i.e., direct-recording electronic, or DRE, machines). Paper-based systems are better for security because they create a paper record that election officials can review in postelection audits. Further measures such as postelection audits – a process that enables states to verify the accuracy of voting equipment and counting machines – act as additional defenses against election interference. As of 2023, forty-five states have mandated some form of postelection audit, up from thirty-five in 2016 (U.S. Election Assistance Commission, 2023).
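
The core logic of a postelection comparison audit can be shown in a short sketch: draw a random sample of ballots, compare the hand-read paper record with the machine’s recorded interpretation, and report any discrepancies. The sketch below is a simplified teaching example with hypothetical data structures; it is not any state’s statutory procedure, and formal risk-limiting audits use more careful statistics to decide how large the sample must be.

```python
# Illustrative sketch of a postelection comparison audit: sample ballots at
# random, compare the hand-read paper record to the machine's interpretation,
# and report discrepancies. Hypothetical data; not a statutory procedure.
import random

def comparison_audit(machine_records: dict[int, str],
                     paper_lookup,           # callable: ballot_id -> hand-read choice
                     sample_size: int,
                     seed: int = 20201103):  # stands in for public audit randomness
    """Audit a random sample of ballots; return (discrepancy rate, flagged ballots)."""
    rng = random.Random(seed)                # reproducible so observers can recheck
    sample = rng.sample(sorted(machine_records), k=sample_size)
    discrepancies = [b for b in sample if machine_records[b] != paper_lookup(b)]
    return len(discrepancies) / sample_size, discrepancies

if __name__ == "__main__":
    # Toy data: 10,000 ballots as read by the scanner, with two simulated misreads.
    machine = {i: ("A" if i % 2 else "B") for i in range(10_000)}
    truth = dict(machine)                    # the paper records (ground truth)
    machine[17], machine[4242] = "B", "A"    # simulated scanner errors
    rate, flagged = comparison_audit(machine, truth.get, sample_size=500)
    print(f"Discrepancy rate in sample: {rate:.2%}; flagged ballots: {flagged}")
```

The fixed, pre-committed seed stands in for the public dice rolls or verifiable randomness real audits use so that observers can reproduce the sample selection.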

These measures have paid off. Reports on the 2020 elections from the DOJ, DHS, and DNI noted that while an increasing number of foreign actors had engaged in influence operations against the United States, there was no evidence that these foreign actors were able to compromise the integrity of election infrastructure during the 2020 federal elections. The same is true for domestic actors: There is no evidence of successful election-related hacking in 2020 by partisans or other politically motivated actors.

As the saying goes, however, past performance does not guarantee future results: Some states still lack a paper trail for ballots cast or any form of postelection audit. All states should eliminate paperless voting machines and implement postelection audits. In addition, as the epidemic of ransomware attacks against municipalities and other targets shows, vulnerabilities remain. Owners and operators of election infrastructure, from local governments to candidate campaign operations, must make cybersecurity a core priority for risk management. This prioritization must come from the top: Senior leaders involved in election administration must make cybersecurity a priority, ensure that resources are devoted to it, and hold themselves and their teams accountable for managing cyber risks. Fortunately, owners and operators have resources they can turn to for guidance and support. For example, CISA maintains a “Cybersecurity Toolkit and Resources to Protect Elections” website with guidance on best practices and free or reduced-price cybersecurity tools from leading vendors (Cybersecurity & Infrastructure Security Agency, 2024).

History teaches us further that cyber risks to election integrity, though real, cannot (perhaps yet) hold a candle to the myriad other ways that intrepid hackers have sought to subvert democracy. An all-hazards approach to election integrity is warranted.

3 From Free Speech to False Speech Analyzing Foreign Online Disinformation Campaigns Targeting Democracies

Introduction

Popular technology platforms enable users to create an account, sometimes anonymously or with minimal identity verification. Users can post and share content instantaneously to networks online. Because this user-generated content is available immediately and is not fact-checked first, social media is ripe for manipulation through disinformation. By the time disinformation campaigns are detected, they may have propagated rampant misinformation – unintentionally false information – as real users share with their networks untrue content they see and believe to be accurate.

Online disinformation efforts pose challenges for detection and mitigation. Technology companies, governments, and civil society struggle to promptly and adequately respond to ongoing threats to the information environment. Governments can be disinformation campaigns’ culprits within their own countries; look no further than New York Times reports of US President Donald Trump’s disinformation-perpetuating commentary on mail-in voting leading up to the 2020 election (Sanger & Kanno-Youngs, Reference Sanger and Kanno-Youngs2020; Stolberg & Weiland, Reference Stolberg and Weiland2020). However, this chapter’s scope focuses on Russian and Chinese campaigns rather than within-country campaigns.

Disinformation campaigns can drive people to reinforce confirmation biases and spread fabricated information they believe is accurate. Today’s vast online information environment bombards users with headlines and makes it challenging to discern trustworthiness. This information overload can benefit education and knowledge accessibility. However, it also allows actors to manipulate social media for their agendas.

States can exploit this environment by using social media’s algorithms and capabilities to spread disinformation. Bots and other technologies offer cheap, scalable, automated ways to amplify coordinated campaigns. These efforts intentionally center on existing societal divides in the targeted countries. Such divisions include the climate crisis, racial strife, and democratic processes. The latter is this study’s focus.
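
One simple way to see how such automated amplification can be surfaced is a coordination heuristic: flag clusters of identical posts published by many distinct accounts within a short time window. The sketch below is a minimal illustration with hypothetical field names and thresholds; production bot-detection systems combine many additional signals, such as account age, posting cadence, and network structure.

```python
# Minimal sketch of a coordination heuristic: flag posts whose normalized text
# is shared by many distinct accounts within a short time window.
# Field names and thresholds are hypothetical; real systems use far more signals.
from collections import defaultdict
from datetime import timedelta

def flag_coordinated_posts(posts, min_accounts=20, window=timedelta(minutes=10)):
    """posts: iterable of dicts with 'account', 'text', and 'timestamp' (datetime)."""
    by_text = defaultdict(list)
    for p in posts:
        by_text[p["text"].strip().lower()].append(p)   # simple normalization

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["timestamp"])
        # Slide over the group looking for a burst of distinct accounts.
        for i, first in enumerate(group):
            burst = [p for p in group[i:]
                     if p["timestamp"] - first["timestamp"] <= window]
            accounts = {p["account"] for p in burst}
            if len(accounts) >= min_accounts:
                flagged.append((text, len(accounts)))
                break
    return flagged

# Example: thirty accounts posting identical text within a few minutes would be
# flagged; an organic post shared once by a single user would not.
```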

Research Questions and Methodology

This research discusses the extent to which, and how, Moscow and Beijing employ online disinformation campaigns to exploit the vulnerabilities of democracies and advance strategic aims. At the heart of this research is an assessment of the chief similarities and differences in tactics, identified through dyads pairing Russia or China with specific democracies.

The methodology employs a review of public reports, archives, and news content coupled with contextualized case studies. This approach centers on Russia and China as the primary propagators of online disinformation against democracies. We focus on well-researched, major democracies to achieve a global perspective of countries affected, although we acknowledge there are other, often less-studied examples, including democracies in developing countries. Each case study incorporates varying examples of global disinformation attacks credibly attributed to originating countries.

Limitations

Limitations stem from the difficulty of studying something that inherently involves deception. The internet’s ability to provide anonymity presents challenges in identifying disinformation’s exact origin and extent. Aspects of social networks can obfuscate whether untrue content is ill-informed misinformation or deliberate disinformation. This work focuses on coordinated, professionalized examples of disinformation against democracies attributed to authoritarian states. Many authoritarian states’ aims remain covert; thus, we look to history, current events, and scholarly experts to analyze authoritarian goals when official statements are unavailable. Another challenge in studying disinformation is accurately measuring reach and impact, including the effect of a disinformation-heavy article or tweet on voter preferences. Confounding factors influence users’ views; therefore, we look to available social media data for best estimates.

Democracy and Authoritarian Threat

Inherent in this research is the belief that democracies need to increase their resilience against foreign governments’ efforts to exert influence through disinformation, as well as an acknowledgment of disinformation’s complexity and international prevalence. Disinformation is not one-sided from authoritarian regimes to democracies but can also emerge within democracies, including sometimes targeting authoritarian states. We also recognize differences between stable, established democracies and emerging, vulnerable, or crisis-ridden democracies.

Russia and China present two intriguing case studies of authoritarian regimes deploying disinformation against democracies for multiple reasons. Russia’s President Vladimir Putin and China’s President Xi share a rapport engendering security cooperation and “remarkably frequent” engagements supporting a “growing agreement about how the world should be” (Kendall-Taylor & Shullman, Reference Kendall-Taylor and Shullman2018, p. 1). They share an understanding that eroding democracy will contribute to their mutual aims, including weakened Western power. Later sections present core similarities and differences between known Russian and Chinese disinformation campaign strategies aimed at democracies.

Many influence operations are at least partially covert, notably aided by the internet’s potential for anonymity and globalization (U.S. Department of Homeland Security, 2020; Wray, Reference Wray2020). A frequent tactic of influence operations is adversaries creating artificial personas and fabricated narratives to denigrate democratic institutions (Wray, Reference Wray2020). Despite most disinformation’s covert nature, reputable sources, such as official government documents, international institutions, and fact-checked nonpartisan media, linked examples discussed to the Russian or Chinese government. While this research focuses on governments’ foreign influence campaigns, non-state actors are also responsible for initiating disinformation campaigns. Including non-state operations would become unwieldy; thus, we remain focused on efforts judged to be launched, backed, or otherwise supported by governments.

Definitions: Misinformation and Disinformation

While disinformation and misinformation are both dangerous and deepen societal divides, the critical difference is intent. This research is concerned with disinformation – sharing false information in intentional efforts to harm or deceive. Misinformation is the “spreading of unintentionally false information” (Theohary, Reference Theohary2018, p. 5). Misinformation can manifest as internet users sharing hoaxes or conspiracy theories they believe are authentic.

Typical examples of disinformation include doctored photos, inaccurate news articles, or tampered official documents purposely planted online. These instances occur within the information environment, which encompasses systems and actors “collect[ing], disseminat[ing], or act[ing] on information” (Theohary, Reference Theohary2018, p. 5). Disinformation in international relations is a pervasive challenge, especially given the internet’s potential for anonymity, low entry cost, and global spread.

Analyzing Online Disinformation from Russia and China

Russia and China are two powers seeking to challenge or undermine norms they view as dominated by the West. Where publications sometimes focus on one or the other, this research analyzes their distinct aims side by side in deploying disinformation against democracies. While there is ample research on historical disinformation from the Soviet Union during the Cold War and its aftermath, renewing this study after Putin’s presidency began in 2000 is helpful. Likewise, President Xi rose to power in China in 2012 and implemented bold breaches of international norms and law, including cyberattacks (White House, 2015). Coupled with China’s economic and geostrategic power, these influence campaigns make valuable case studies.

Despite authoritarian similarities between Russian and Chinese disinformation targets and strategies, there are distinct contrasts. One difference is Russia’s longstanding tradition of implementing international disinformation campaigns that precede the internet’s advent. In comparison, China’s power and participation in the global disinformation landscape have grown in recent years (Bradshaw & Howard, Reference Bradshaw and Howard2019; Wray, Reference Wray2020). This chapter creates an analytical framework to map authoritarian regimes’ foreign influence efforts against democracies: (1) through longer-term, ambient disinformation, (2) during transitions of political power, and (3) during social and cultural divides. This framework drives our analysis.

Russian Online Disinformation Campaigns Targeting Democracies
Russian Disinformation’s Strategic Aims

Moscow’s aims and global positioning as an authoritarian geostrategic power help us contextualize Russian disinformation. Because Russian disinformation provides some of the most well-known and credibly attributed examples of foreign interference, Kalathil (Reference Kalathil2020, p. 36) suggests the Russian state serves as a “model for other authoritarians’ efforts.” Evidence of Russian online disinformation affecting democracies across Europe, North America, and elsewhere confirms the reach of Kremlin-initiated interference campaigns.

Bernstein (Reference Bernstein2011) calls Russia a “‘hybrid’ authoritarian regime” under Presidents Putin and Medvedev because of the existence of some democratic institutions, including multiparty elections and competitive parties. These democratic institutions are restricted yet noteworthy. In theory, their existence should indicate that democracy’s norms are accepted and that movement toward a more complete democracy remains a possibility, however meager under Putin’s regime.

Russia’s strategy has long incorporated influence operations, with the Kremlin-supported Internet Research Agency (IRA) beginning as an effort to exert influence within Russia (Pomerantsev, Reference Pomerantsev2019). Turning to Russia’s foreign targets, this research concentrates on two categories: geographically close democracies and Western democracies. Russia deployed strategic efforts externally with post-Soviet “near abroad” targets of Estonia, Georgia, Latvia, Moldova, Lithuania, and especially Ukraine (Kalathil, Reference Kalathil2020; Sukhankin, Reference Sukhankin2019). Because Russia assesses its might compared to the United States and other powers, the Kremlin also targeted Western democracies. Weakened democracy, particularly in the West, helps increase Russia’s perception of its power and influence.

Kendall-Taylor and Shullman (Reference Kendall-Taylor and Shullman2018) describe Russia’s global strategy as “confrontational and brazen.” The Kremlin’s assault on democratic institutions spans foreign interference in elections, disinformation campaigns, and corruption to erode global commitment to democracy. This research focuses on the disinformation component while recognizing disinformation as one method within the broader Russian foreign influence approach. Even within disinformation, multiple strands of influence emerge. While some understand disinformation as only overt falsehoods, this work asserts that disinformation is a complex challenge for democracies, including tactics such as the Russian military intelligence (GRU) use of narrative or information laundering and boosterism, as supported by DiResta and Grossman (Reference DiResta and Grossman2019). Narrative laundering is when “a story is planted or created and then legitimized through repetition or a citation chain across other media entities,” and boosterism is repetitious content “created and disseminated to reinforce the perception that a given position represents a popular point of view” (DiResta & Grossman, Reference DiResta and Grossman2019, p. 5). These tactics rest at the heart of the Kremlin’s foreign influence efforts.

The U.S. Department of State’s Global Engagement Center (GEC, 2020) organizes Russia’s disinformation strategy into five pillars. Those pillars are official government communications, state-funded global messaging, cyber-enabled disinformation, proxy source cultivation, and social media weaponization (GEC, 2020). These activities’ explicit connection to the Russian government ranges from clear-cut, such as official government communications, to more covert, such as cyber-enabled forgeries or cloned websites. By employing a combination of overt and covert tactics, the Kremlin penetrates the information environment on multiple fronts for increased effect.

Analytical Framework Applied to Russian Campaigns

Considering Russia’s aims, this section studies Russian online foreign influence in the form of ambient disinformation campaigns, exploitation of social divides, and interference during transitions of power. Ongoing, ambient disinformation plays a critical role in this strategy for amplifying falsehoods’ reach. With continuous foreign disinformation, a divided democracy can become a less powerful threat with reduced capacity to withstand further attacks, and this diffusive disinformation strategy can create optimal conditions for more targeted disinformation efforts (Beskow & Carley, Reference Beskow and Carley2019). The Kremlin’s strategy capitalizes on these advantages by employing long-term, ambient disinformation campaigns against democracies between more targeted information attacks.

Military leaders have noted the shift from isolated periods of conflict to persistent information operations by authoritarian powers in the international system. Current warfare is not always declared but is undeniably already happening (Beskow & Carley, Reference Beskow and Carley2019; Gerasimov, Reference Gerasimov2016). Thus, democracies must be perpetually vigilant about widespread disinformation. The continual nature of Russian disinformation necessitates planning and foresight for successful long-term influence efforts. Researchers suggest the Kremlin organized disinformation activities on social networks well before notable moments, including elections (Kalathil, Reference Kalathil2020; Starks, Cerulus, & Scott, Reference Starks, Cerulus and Scott2019). This long-term preparation allows for increasingly effective targeted attacks during moments of particular vulnerability. While the exact extent is difficult to quantify, ongoing campaigns continually seek to erode democratic institutions and advance authoritarian aims.

Influencing Transitions of Political Power

The Kremlin leverages disinformation to influence transitions of political power in the West and in neighboring countries, as during the 2014 Ukrainian Maidan Revolution and the 2016 US presidential election. Russia has increased its influence efforts targeting its post-Soviet “near abroad” since the early 2000s, an escalation that culminated in the Russia–Ukraine war in 2022. This section turns to the 2014 Ukrainian Maidan Revolution to analyze Russian interference during political transitions as part of broader aims still unfolding today.

The 2014 Ukrainian Maidan Revolution

Moscow’s efforts to influence nearby post-Soviet countries since the early 2000s include foreign influence campaigns during countries’ political transitions. Ukraine is a geographically close democratic target with evident strategic importance to Russia. There is much to learn about the extent of Russian foreign interference across time, including the Kremlin’s 2014 to 2016 interference in Ukraine and social media’s ongoing role in the Russia–Ukraine conflict.

The 2014 Ukrainian Maidan Revolution was a period of civil unrest and demonstrations in Ukraine, beginning with protests in late 2013 when the Ukrainian government chose not to sign the “Association Agreement and the Deep and Comprehensive Free Trade Agreement with the European Union” (Federal Department of Foreign Affairs, 2020). This protest for greater European integration grew into the removal of Ukraine’s pro-Russian President Viktor Yanukovych and unrest over ongoing corruption, human rights violations, and abuses of power (Federal Department of Foreign Affairs, 2020). In response to President Yanukovych’s ousting, the Kremlin organized attempts to delegitimize Kyiv’s new government and Ukraine’s growing inclination to the West through disinformation, proxy war, and militaristic aggression (Sokol, Reference Sokol2019). Through disinformation-heavy influence campaigns, Putin painted adversaries as detesting Jews and the new government as a “fascist junta” spreading antisemitism, racism, and xenophobia (Sokol, Reference Sokol2019, para. 3). This disinformation tactic was significant – and effective – because of World War II’s lasting impact throughout the post-Soviet arena.

The Russian disinformation effort to frame Ukraine’s post-revolution leaders as racist and violent occurred at a similar time to when Russia was aiming to conquer the Crimean Peninsula. This invasion of Ukrainian territory and the Crimean annexation began as a Russian state-sponsored media campaign targeting Ukraine (Summers, Reference Summers2017). Russian disinformation efforts inundated Crimeans with narratives that they were at risk from their fellow people in Kyiv (Yuhas, Reference Yuhas2014). This disinformation led many Crimeans to welcome Russian military presence’s perceived protection. Disinformation efforts worked to overwhelm the information environment and advance the Kremlin’s aims by distracting from their actions and negatively framing others’ activities.

A component of this and other Russian campaigns is the ability to morph actual events into a new, carefully crafted narrative removed from reality – but rooted in it and recontextualized. An example is the assertion that Jewish people felt forced to leave Ukraine. While tens of thousands left Ukraine for Israel, as reported by Israeli interior ministry statistics, the departure’s motivation was reportedly the danger felt due to Russian aggression, not racism as Russia sought to contrive (Sokol, Reference Sokol2019). Russia maximized the opportunity to frame the exodus as Ukraine’s doing and leveraged cyber-enabled attacks to create a perception of chaos and instability within Ukraine (Summers, Reference Summers2017). Beyond the outside perception of Ukraine, the efforts contributed to Ukrainians’ eroding faith in their government and overall unity.

Evidence of the Kremlin propagating anti-Western, pro-Russian content targeting Ukrainians across social media spans years (Chen, Reference Chen2015; Summers, Reference Summers2017). The strategically crafted messages from Russia across time and networks showcase the Kremlin’s attempts to grow Ukrainian support for Russian interests (Chen, Reference Chen2015). While it remains a democracy’s responsibility to address public opinion within its citizenry, acknowledgment of Russian interference’s role in shaping Ukrainians’ perceptions is critical.

The 2016 US Presidential Election

The Kremlin’s foreign interference attempts are not limited to nearby post-Soviet countries. A highly publicized context of Russian influence campaigns during political transitions is the 2016 US presidential election. Russian interference in the 2016 US presidential election was confirmed by official US government agency investigations, most notably an extensive bipartisan U.S. Senate Intelligence Committee report. Moscow employed a combination of disinformation and hacking intended to push the 2016 US presidential election in favor of Trump (Funke, Reference Funke2020). Trump became the eventual winner, although assessing how much Russia influenced the outcome from open-source information is challenging. This chapter is less concerned with disinformation’s effectiveness than with the extent to which authoritarians deploy campaigns in attempts to influence democracies.

TIME’s Shuster (Reference Shuster2020) details how Russia initiated a two-pronged attack centered on spreading online disinformation and incorporating cyberattacks to hack the Democratic Party and election infrastructure. Compared to 2020, the 2016 US presidential election saw more invasive interference with a focus on influence through state-run media and social media disinformation (Shuster, Reference Shuster2020; Wray, Reference Wray2020). Experts, including the Alliance for Securing Democracy’s Bret Schafer, contend that Russia’s interference in the 2016 election was stronger than in 2020 because social networks had yet to improve the removal of disinformation-spreading Russian bots and fake accounts (Shuster, Reference Shuster2020). Between 2016 and 2020, American discourse became increasingly divided and fraught with false information and conspiracies originating domestically; thus, Russian efforts in 2016 sowed lasting seeds but arguably did not need to continue amplifying the disinformation already in motion and impacting outcomes (Shuster, Reference Shuster2020).

The U.S. Senate Intelligence Committee shared its final report on Russian influence in the 2016 presidential election in 2020. The Hill Staff (2020) reported that the committee’s findings were similar to those of the investigation by special counsel Robert Mueller, with “overwhelming evidence of Russia’s efforts to interfere in the election through disinformation and cyber campaigns but … a lack of sufficient evidence that the Trump campaign conspired with the Kremlin to impact the outcome of the 2016 election.” Regarding Moscow’s influence in the 2016 US election and Donald Trump’s involvement, Slate’s Stahl (Reference Stahl2020) argues that “at the very least, the Trump team was aware of and welcomed this meddling.” In-country complicity in foreign disinformation is a complicating factor, as politicians who benefit from disinformation may be disinclined to intervene or discredit it.

Russian involvement in the 2016 election appears motivated by aims to “tarnish U.S. democracy” and enable Moscow’s assertion that “Washington has no right telling other nations how to conduct their elections” (Kendall-Taylor & Shullman, Reference Kendall-Taylor and Shullman2018, p. 1). Such foreign influence can undermine legitimate democratic processes as people question whether events are authentic or the result of foreign influence. While it is challenging to quantify disinformation campaigns’ outcomes, it is undeniable that disinformation’s hundreds of millions of social media impressions had lasting effects (Al Jazeera English, 2018). Even years later, after Trump’s defeat in the 2020 election, the 2016 disinformation continued to shape US sentiments, political views, and rhetoric, including growing distrust of, and challenges to, democratically decided outcomes.

This assessment of Ukraine’s 2014 Maidan Revolution and the 2016 US presidential election analyzes Russian disinformation attempting to exploit transitions of political power. These political transitions often include social fractures, and a disillusioned citizenry is vulnerable to adversaries’ efforts. The following section focuses on periods of social and cultural divides as democratic vulnerabilities offering powerful opportunities for foreign influence.

Social and Cultural Divides

Social and cultural divides provide periods of vulnerability where adversaries may strike on democracies’ existing societal cleavages. These tensions include those between races, religions, political parties, or a country’s people and its military (Beskow & Carley, Reference Beskow and Carley2019). This section analyzes efforts to capitalize on democracies’ social and cultural divides through the Brexit referendum.

Brexit Referendum

An example of alleged Russian interference through disinformation is the 2016 United Kingdom European Union membership referendum, also known as the Brexit referendum. The UK’s Intelligence and Security Committee of Parliament (ISC) investigated Russian interference targeting this social divide (Dearden, Reference Dearden2020). The ISC report found insufficient evidence that “Russia meddled to any significant extent … nor did Russia influence the final outcome,” although it found UK intelligence neglected Russian threats (Castle, Reference Castle2020, para. 6). The ISC report was completed long before its release, which was delayed until after the general election (Castle, Reference Castle2020; Ellehuus & Ruy, Reference Ellehuus and Ruy2020).

The disinformation-spreading Strategic Culture Foundation (SCF) published “editor’s choice” articles, such as “The Russian Brexit Plot That Wasn’t,” to discredit evidence of Russian interference (Robinson, Reference Robinson2020, p. 1). While the extent of Russian disinformation was judged insufficient to sway the result, leadership’s reported choice not to investigate evidence of interference sooner provided a window of opportunity for Russia to bolster its campaigns. Indications of Russian interference in the British government predate the Brexit referendum, including Russian permeation of London, with Castle (Reference Castle2020) referring to the capital as “Londongrad.” While Russian interference in the Brexit referendum likely did not alter the vote’s outcome, its extent is nonetheless concerning, as is the lack of democratic safeguards against interference in Britain and elsewhere.

Chinese Online Disinformation Campaigns Targeting Democracies
Chinese Disinformation’s Strategic Aims

Despite authoritarian similarities to Russia, China’s strategy maintains unique qualities, making it an attractive second case study. The Chinese Communist Party (CCP) is well-known for its propaganda, censorship, and disinformation (Blumenthal & Zhang, Reference Blumenthal and Zhang2020). China’s political leaders deny any chance of movement to democracy and favor authoritarian norms. However, China’s authoritarian leadership maintains both soft and hard aspects along with self-branding as a “consultative democracy” (Bernstein, Reference Bernstein2011; Kim, Reference Kim2019). While China was previously content to coexist with democracies, its growing power brings a willingness to challenge democracy’s broader norms.

The CCP acknowledged that China’s military and economic power was insufficient to achieve its aims and turned to increased information warfare through disinformation (Cole, Reference Cole2019). President Xi has championed “discourse power” to influence narratives and share China’s story on the global stage (Rosenberger, Reference Rosenberger2020, para. 9). Under President Xi, China created a narrative of democracy to grant its political system more “ideological legitimacy” in mainland China, Taiwan, and Hong Kong – and among other communities watching (Kim, Reference Kim2019, para. 8). There has been a marked increase in the CCP’s investment in technologies and in the intensity of its disinformation efforts to brand China positively.

China increasingly prioritizes the global race for technological advancement. Citing security risks, the United States restricted Chinese companies Huawei and ZTE and encouraged European restriction of their reach (Bartz & Alper, Reference Bartz and Alper2022; Mukherjee & Soderpalm, Reference Mukherjee and Soderpalm2020; Shepardson, Reference Shepardson2021). Amid this technological power struggle, the CCP expanded its funding for the China Standards 2035 and Made in China 2025 initiatives, aiming to minimize reliance on the United States and promote domestic products (Sinkkonen & Lassila, Reference Sinkkonen and Lassila2020).

In the State Council Information Office’s White Paper on China’s Political Party System, the CCP promoted its “socialist democracy” branding, combining consultative and electoral democracy (Kim, Reference Kim2019, para. 2; USC US-China Institute, 2007). China’s vulnerability to criticism spurred leaders to craft an image of a democracy differentiated by “Chinese characteristics” rather than criticizing democracy wholesale (Kim, Reference Kim2019, para. 2). Though still lacking specific democratic processes and values, this effort sought to protect China’s domestic and international image. Beijing shifted its strategy to project China’s system as legitimate and necessary for its prosperity. China’s concerns about its global standing contextualize its foreign disinformation strategy.

Kendall-Taylor and Shullman (Reference Kendall-Taylor and Shullman2018, p. 1) assert that China “used a subtler and more risk-averse strategy, preferring stability that is conducive to building economic ties and influence.” This argument contends China’s approach focuses on offering resources to weaker democracies to distance them from the West (Kendall-Taylor & Shullman, Reference Kendall-Taylor and Shullman2018). These CCP efforts would be less influential without Russian and other countries’ concurrent campaigns to erode democratic fixtures.

The CCP’s economic, relatively risk-averse, and opportunistic strategies align with this research’s examination of Chinese online disinformation. We contend that insufficient public understanding of China’s past foreign interference operations has combined with a marked increase in China’s disinformation activity in recent years. Kalathil (Reference Kalathil2020, p. 38) notes the CCP’s foreign influence operations were long seen as consisting mainly of apparent, innocuous propaganda on official channels with “relatively minimalist and ineffective” outcomes. However, the CCP’s approach had likely been oversimplified and underestimated. Kalathil (Reference Kalathil2020) adds that this analysis ignores China’s longtime foreign influence activities to shape the international information environment in its favor.

The CCP seeks to alter broader narratives through private business and journalism, making China’s strategy more complex than merely promoting its economic interests. The likely underestimation of China’s foreign influence through disinformation partially stems from the deception involved. Evidence of Chinese computational propaganda existed largely within its borders through QQ, WeChat, and Weibo until 2019 (Bradshaw & Howard, Reference Bradshaw and Howard2019). The CCP can maintain power over messaging in China by employing censorship and banning social networks. Although these global platforms are banned for Chinese citizens, the CCP’s interest in them indicates a wider strategy – one that targets norms, standards, infrastructure, governance, and technology (Kalathil, Reference Kalathil2020).

Two central trends of the CCP’s disinformation against democracies are a notable rise in disinformation campaigns in recent years and a greater investment in technology for foreign influence. The following sections analyze examples of this shift to include increasingly global platforms and audiences related to our analytical framework’s themes of political transitions, societal divides, and ambient disinformation.

Analytical Framework Applied to Chinese Campaigns

Ongoing disinformation proves significant to China’s aim of projecting a positive reputation. Online campaigns enable China to project favorable information while also burying criticism. Within China, the CCP can swiftly erase dissenting comments on Chinese platforms while banning other networks. Disinformation allowed China’s reach across social networks to grow commensurately with its global stance (Lee Myers & Mozur, Reference Lee Myers and Mozur2019). While the CCP can censor and crack down on unwanted information to curate its narrative within China, it can also deploy bots to bury unwanted information and spread pro-CCP messaging across global networks.

In addition to creating supportive press for China and unfavorable press for adversaries, some CCP disinformation aims to deflect attention from China’s perceived national failings, including the early mishandling of COVID-19, by inundating Chinese and global audiences with heightened campaigns to drown out criticism (Lee Myers & Mozur, Reference Lee Myers and Mozur2019; Wallis et al., Reference Wallis, Uren, Thomas, Zhang, Hoffman, Li, Pascoe and Cave2020). Given the CCP’s intent to maintain positive narratives and geostrategic relationships globally, ongoing disinformation proves valuable in burying negative commentary and projecting pro-China content.

China–Australia relations provide a compelling example of this strategy. Australia’s democracy experienced decades of economic benefits from its relationship with China, including Chinese tourism and student populations (Searight, Reference Searight2020). However, the revelation of Chinese disinformation aimed at influencing Australian politics and discourse strained the relationship, despite some attempts to rebuild trust (Morrison, Barnet, & Martin, Reference Morrison, Barnet and Martin2020; Wallis et al., Reference Wallis, Uren, Thomas, Zhang, Hoffman, Li, Pascoe and Cave2020; Woo, Reference Woo2023). Evidence revealed CCP-linked disinformation and political donations intended to influence major Australian political parties’ policies on China (Searight, Reference Searight2020; Wallis et al., Reference Wallis, Uren, Thomas, Zhang, Hoffman, Li, Pascoe and Cave2020). Because China’s ongoing disinformation and soft power in Australia are pervasive, this dynamic presents economic and security challenges for the relationship’s future.

While these ongoing disinformation efforts targeting democracies serve as a backdrop to the international system, there are marked periods when China opportunistically exploits democratic vulnerabilities through societal divisions. This chapter next analyzes Taiwan as a case of Chinese influence during political transition before assessing the Hong Kong protests as Chinese influence during social and cultural divides.

Influencing Transitions of Political Power

China’s relationship with Taiwan and Hong Kong is contested and complex; therefore, this research acknowledges China’s disinformation against Taiwan and Hong Kong’s pursuit of democracy as having both domestic and international aspects (Hernández & Lee Myers, Reference Hernández and Lee Myers2020; Horton, Reference Horton2019). With different political systems, Taiwan and Hong Kong showcase examples of China’s interference in democratic institutions through disinformation.

Taiwan

Taiwan presents a timely example of China targeting its influence operations at democratization and political transitions. Themes of China’s online disinformation include undermining Taiwan’s ruling Democratic Progressive Party (DPP) and President Tsai Ing-wen (Steger, Reference Steger2020). The CCP recognized that Taiwan’s democratization complicates its aim of acquiring control of Taiwan (Cole, Reference Cole2019). China is willing to go to great lengths to interrupt Taiwan’s democratization, including military force and information warfare (Hernández & Lee Myers, Reference Hernández and Lee Myers2020).

Inaccurate information across Facebook, YouTube, and Twitter (now X) targeted Taiwan’s democratic institutions, including presidential elections and DPP leadership. Wallis et al. (Reference Wallis, Uren, Thomas, Zhang, Hoffman, Li, Pascoe and Cave2020) uncovered an increase in Twitter accounts created by China’s Ministry of Foreign Affairs diplomats, spokespeople, state media, and embassies through 2018 and 2019. Disinformation from China included homophobic rumors regarding President Tsai Ing-wen’s sexuality. These disinformation attacks capitalized on the resentment some citizens felt toward her policies supporting LGBTQIA+ rights (Steger, Reference Steger2020). Fact-checking organizations debunked unfounded claims; however, the popularity of closed messaging groups throughout Taiwan proved troublesome, as users could privately forward untrue information (Steger, Reference Steger2020).

Taiwan’s response showcases an instructive model for fighting foreign influence from disinformation. Taiwan fined CtiTV, a primary cable network, in response to inadequate fact-checking and created its Department of Cyber Security and ministry-specific disinformation detection task forces (Halpert, Reference Halpert2020). Before Taiwan’s 2020 presidential election, the country passed the Anti-Infiltration Act to impose potential fines and prison sentences for actors peddling disinformation, obstructing elections, or interfering with international politics (Halpert, Reference Halpert2020). China’s Taiwan Affairs Office spokesperson asserted the act generates “panic that everyone is treated as an enemy” and China has never engaged in “elections in the Taiwan region” (Reuters, 2019, para. 6). The wording of the “Taiwan region” rejects its autonomy, and the framing of this policy as hurting “everyone” offers context to inform our analysis. The “infiltration sources” Taiwan’s legislation aims to protect against are understood to mean Chinese interference (Reuters, 2019, para. 4). Taiwanese media focuses on Chinese intelligence as a key player in discrediting Taiwan President Tsai Ing-wen and her advocacy for Taiwan’s independence (Aspinwall, Reference Aspinwall2020). Taiwan’s efforts to institute anti-disinformation laws in reaction to China follow examples from Finland and Estonia as recipients of Russia’s foreign influence through disinformation (Aspinwall, Reference Aspinwall2020).

Taiwan’s societal divides make Chinese disinformation more effective; attacks on President Tsai Ing-wen’s support for LGBTQIA+ rights, for example, leveraged some citizens’ homophobic sentiments. Taiwan’s 2020 election depicts a period of vulnerability through the potential transition of power, one that overlapped with societal divides the CCP exploited to undermine Taiwan’s president. These divisions present opportunities for further analysis of Chinese influence against democratic institutions.

Social and Cultural Divides
Hong Kong Protests.

In recent decades, grassroots protests have become increasingly threatening to authoritarian regimes. Hong Kong presents a noteworthy case of Chinese interference in democracy. China’s online disinformation targeting Hong Kong appears to follow a three-pronged approach: (1) accusations that the protesters are dangerous and violent, (2) allegations that the United States and other foreign actors are behind the protests, and (3) demands to support the police in Hong Kong and undermine the protests (Wallis et al., 2020).

The CCP’s strategy shifted from efforts to contain democracy in Hong Kong to more direct control, making its use of disinformation more interventionist. China responded to the Hong Kong protests by increasingly suppressing opposition, especially through technological means, including the amplification of disinformation on social networks (Sinkkonen & Lassila, 2020). In 2019, China’s government used global social media platforms to paint democracy advocates in Hong Kong as violent radicals lacking popular support (Bradshaw & Howard, 2019; Lee Myers & Mozur, 2019). Digital repression has not entirely eclipsed physical means of influence but offers new opportunities for attacks against adversaries (Sinkkonen & Lassila, 2020). Disinformation is a dangerous method for regimes to distort or otherwise control narratives about the extent of physical measures, popular opinion, and other forms of interference. Twitter and Facebook removed various accounts tied to the CCP’s efforts to undermine the protests in Hong Kong, and Twitter announced policies to prohibit "state-controlled news media" advertisements (Halpert, 2020; Twitter Inc., 2019). However, limited staff, scale, and other factors complicate timely responses.

China shared content through social and state media to stoke nationalist and anti-Western views. This strategy included manipulating photos and videos to discredit protesters or label the protests a dangerous gateway to terrorism (Lee Myers & Mozur, 2019). In mid-2019, Twitter, Facebook, and YouTube uncovered and suspended accounts tied to Beijing that were propagating Hong Kong protest disinformation (Alba & Satariano, 2019). These efforts widened the gap between perceptions of the movement: within Hong Kong, the demonstrations were popular, yet the Chinese narrative led people to believe a small, violent group of radicals yearned to tear China apart.

As China advances its disinformation strategy, it likewise increases its expertise and ability to leverage social media’s reach to networks worldwide. The CCP employs teams of citizens to "actively shape public opinions and police speech through online channels" (Bradshaw & Howard, 2019, p. 17). Networks of people and bots amplify disinformation to manipulate opinion (Alba & Satariano, 2019). China invests in professionalizing its disinformation tactics, including formal organizations with hiring plans and bonuses for performance (Alba & Satariano, 2019).

These data support the conclusion that China uses social media networks to exert influence and power worldwide, particularly regarding its claims to Taiwan and Hong Kong as sovereign territory. The scale of Chinese disinformation campaigns aimed at these territories reflects where the CCP has, until recently, concentrated its aims and energy. These geographically close, territory-based goals are critical, but they are not the only form of influence operations as China’s aims evolve with its growing geostrategic power.

Analysis and Insights

Our analytical framework is applicable across authoritarian contexts and finds certain similarities in its analysis of Russian and Chinese disinformation. Moscow and Beijing are both prominent disinformation-propagating actors in the international system, and they target robust Western democracies and neighboring states of strategic importance. However, it would be a mistake to assume the substance, aims, and contexts of these different authoritarian states’ disinformation campaigns are neatly comparable. Each authoritarian state – and democratic target – has specific complexities.

While this research’s analytical framework offers valuable insight broadly, there are core differences in how, to what ends, and to what extent these authoritarian states deploy their disinformation campaigns. Russia and China may share interests in diminished Western powers and antidemocracy efforts, but different ideologies and objectives drive the states’ activities (Jeangène Vilmer & Charon, 2020). Each authoritarian state’s disinformation campaigns against democracies require analysis within the dyadic relationship between the authoritarian state and the target of its disinformation.

Distinguishing Aims and Leadership Profiles

While Russia’s and China’s disinformation campaigns share similar strategic aims, they do not necessarily use similar strategies. Broadly speaking, Russia builds on its longstanding history of propaganda for a more direct, manipulation-driven approach, while China has more recently invested heavily in technological innovation for a gradual, permeating, censorship-driven approach. The CCP prioritizes presenting China’s global image as respectable, uncorrupt, and morally sound and subdues content indicating otherwise (Jeangène Vilmer & Charon, 2020). Russia is less concerned with its image and appears as a "well-armed rogue state" aiming to disrupt the present international order (Dobbins, Shatz, & Wyne, 2019, para. 1). While China seeks increasing power, it is less interested in Russia’s goal of disrupting the system because it holds greater power within it and faces steeper reputational risks. Part of Russia’s impetus for disinformation is its weakening global stance and its lack of benefits from the present international order (Jeangène Vilmer & Charon, 2020). The Kremlin is thus willing to embrace risks in its disinformation strategy to disrupt the system for greater power and influence.

In contrast to Russia, China has more strategically prioritized economic relationships, including investment, trade, and development assistance, to grow its influence in the international system (Dobbins, Shatz, & Wyne, 2019). China’s export of products and services aids the country financially and helps develop other countries’ relationships with – and dependency on – China. The breadth of Chinese exports showcases Beijing’s infrastructure investments and its ambition to lead technologically and globally (Sinkkonen & Lassila, 2020). China’s Belt and Road Initiative incorporates a Digital Silk Road component to further Chinese global advancement (Greene & Triolo, 2020). These relationships make China more risk-averse, reputation-oriented, and gradual in its disinformation than Russia.

China holds a more dominant role in the current international order and must be more careful and less overtly aggressive in its strategy. Despite their deflection and denial, Russia and China do little to hide their interference efforts because there is little need. With low costs, a lack of gatekeepers, and accessible social networks, disinformation is an alluring way to further aims with relative anonymity (Garcia, 2020). While research can indicate the most likely source of disinformation against a specific democracy, definitive proof of culpability is challenging. The disinformation-deploying state can evade consequences by denying responsibility while inflicting ramifications on the targeted state.

If President Xi’s foreign policy becomes more aggressive, these differences in Russian and Chinese aims may shrink. In July 2021, China for the first time announced sanctions against Western institutions that criticized it (Rudd, 2022). Another example is China’s growing interest in near-abroad territories and its diminishing concern for positive relations with certain democracies, particularly Taiwan. Rudd (2022, para. 3) contends that "ideology drives policy more often than the other way around" under Xi’s leadership, and that Xi seeks to strengthen the Communist Party by stirring nationalism and asserting foreign policy to solidify China’s power. In contrast, President Putin projects a "clever, manipulating strongman" image to protect his power as a ruler (Kovalev, 2023, para. 4). A strongman who loses the appearance of might could be driven to drastic action or demise. The Russia–Ukraine war, especially the Wagner Group rebellion, prompted doubts about Putin’s crafted image (Kovalev, 2023; Sly, 2023). While Putin moves to preserve his leadership status, Xi appears driven by a "Marxist-inspired belief" that Chinese strength means a "more just international order" (Rudd, 2022, para. 3). These approaches, centering the person versus the party, lead to different strategies for authoritarianism.

While it is unlikely that China will forgo its priority of positive international relations, President Xi is likely watching the Russia–Ukraine war closely as Beijing assesses possible territorial moves. While analysis of disinformation campaigns needs to happen within the individual dyads of the instigating state and the targeted state, it is useful to analyze the extent of partnerships between or among authoritarian states in pursuing their interests to undermine democracy. This is especially true when interests align, such as Russia’s territorial ambitions in Ukraine and China’s in Taiwan.

Power Relations and Strategic Cooperation

This inquiry requires analysis of strategic cooperation between authoritarian states to target democracies. China and Russia share a border and adversarial relations with the United States, with Xi and Putin projecting an "intimate" friendship (Lau, 2023, para. 1). Russia’s and China’s agendas share aims to influence the hallmarks of democracy, including free speech and public debate. Both measure their power relative to Western democracies, meaning that weakened democracy could increase their strength (Kendall-Taylor & Shullman, 2018). Both states leverage disinformation to legitimize authoritarian systems and delegitimize democratic systems. While some cooperation between Russia and China exists, this dynamic is more nuanced than a simple authoritarian alliance. Despite sharing similar adversaries, authoritarian regimes are also competing with each other.

Disinformation-propagating authoritarian regimes have learned from each other’s antidemocracy campaigns, especially from Russia under Putin, to build more robust disinformation strategies. With the Russian government’s disinformation appearing the most visible, it is understandable yet simplistic to apply the same motivations to China. China is unlikely to cooperate extensively with Russia and its aggressive approach for fear of losing its dominant position in the international trading system. Unless necessary, the CCP will likely tread carefully to avoid harming its economic and strategic relationships.

While priorities differ, authoritarian states’ collective efforts create a "more corrosive effect on democracy than either would have single-handedly" (Kendall-Taylor & Shullman, 2018, p. 1). Russia and China’s aims are specific to their regimes, yet they operate within the same antidemocratic ecosystem. Without Russian campaigns to erode global commitment to democracy and democratic institutions, China’s foreign interference efforts would likely be less powerful (Kendall-Taylor & Shullman, 2018).

Some experts designated 2020 and 2021 as "years for Russian–Chinese science cooperation with the focus on communications, AI and the Internet of Things," building on past partnerships like 2019’s "Sino-Russian Joint Innovation Investment Fund" (Sinkkonen & Lassila, 2020, p. 6). Because much of this cooperation would presumably occur in private given disinformation’s deceptive nature, we lack concrete evidence of the extent to which the governments are strategically cooperating.

As there are moving pieces to disinformation campaigns and the states’ individual aims vary, coordination may not be necessary. Russia, China, and other authoritarian states can and do act independently and amplify each other when it aligns with individual interests (Jeangène Vilmer & Charon, 2020). Given Beijing’s central aim to promote pro-CCP messaging, it is unlikely Moscow would spread pro-CCP propaganda without benefiting the Kremlin or eroding adversaries’ powers. Although antidemocracy disinformation serves both states’ interests, applying a blanket model to authoritarian regimes’ influence operations would be a disservice to countering interference.

Resilience and Democracy

Foreign interference efforts from authoritarian states have implications for democratic health and vulnerabilities, and democracies must take preemptive steps to protect themselves. There are multiple technical and governance strategies that democracies can consider to increase resiliency against targeted disinformation campaigns (RAND Corporation, 2023). Two useful methods are improved digital and media literacy among citizens and partnership-building across affected sectors and countries. Creating and promoting information literacy support for citizens can reduce vulnerabilities to disinformation. Especially in recent years, practitioners have created resources to help citizens safely navigate the online information environment, including IREX’s Learn 2 Discern curriculum, Stanford’s Civic Online Reasoning program, Google’s Interland, and the News Literacy Project’s Checkology curriculum (Brooks, 2020; News Literacy Project, 2023; RAND Corporation, 2023; Stanford History Education Group, 2023). Civil society education alone is insufficient against targeted online disinformation, but greater information literacy is vital for democracies to stay resilient against foreign interference (Brooks, 2020).

Another means to reduce democratic vulnerabilities to online disinformation is through partnerships. Just as disinformation-propagating outlets have partnered to spread nefarious content, collaboration among reputable outlets can reduce democratic vulnerabilities. For disinformation in contexts such as Hong Kong or Ukraine, partnerships between citizen journalists and mainstream media can strengthen the quality of information (Brooks, 2020; Huang, 2020). In places where high-risk geopolitical events or other conditions make international reporting challenging, networks of trusted citizen journalists are critical to mitigating erroneous information and reporting timely, authentic news (Visram, 2020).

By combining education and partnerships, Taiwan’s response showcases a whole-of-society approach to combating Chinese disinformation (Huang, 2020). New laws, notably the Anti-Infiltration Act, imposed potential prison sentences and heavy fines on people spreading disinformation (Aspinwall, 2020; Halpert, 2020). Taiwan’s institutional approach formed ministry-specific disinformation detection task forces and supported programs led by Taiwan’s digital minister. Other countries have implemented other forms of regulation and protection, and democracies should continue to assess which models work best for their online and in-country contexts.

For democracies to stay healthy and competitive against targeted disinformation, they must create policies and safeguards to protect the free flow of reliable information online. Sokol (2019, p. 1) describes how successful disinformation campaigns are "based on a core of truth … distorted and exaggerated beyond recognition." The challenge of disinformation requires democracies to protect their institutions in ordinary times as well as in moments of greatest vulnerability. So long as they support transparency, public debate, and free speech, democracies will likely remain targets of disinformation. By addressing the roots of their internal divisions, democracies gain greater resilience to foreign influence efforts.

Conclusion

Technology has become a permanent fixture in society and will continue to pose opportunities – and challenges – to the international system. As new technologies evolve, so must democracies to stay resilient against foreign adversaries’ attempts to weaken democratic institutions through disinformation.

This research analyzed the extent to which and how foreign online disinformation campaigns target democracies, specifically disinformation from the Russian and Chinese governments toward specific democracies. We found Russia is a manipulation-driven, rogue state with a history of disinformation aiming to disrupt the international order as it senses its slipping power. China likewise seeks increasing power, yet is less concerned with disrupting the international system because it holds a more powerful global position and deploys a more gradual, censorship-driven approach.

Assessing authoritarian states’ online disinformation campaigns through the same analytical framework proves useful; yet assuming all authoritarian states operate similarly is a mistake. Each authoritarian state’s disinformation campaigns necessitate assessment within dyadic relationships between disinformation-propagating states and their targets. Examples of democracies targeted by Russia and China fit into our analytical framework of long-term, ambient disinformation, transitions of political power, and social and cultural movements. They also exposed patterns in targets primarily categorized as either Western powers or neighboring states of strategic importance.

The main distinctions between Russia and China and their various targets reflect different aims, objectives, risks, and contexts for using targeted disinformation depending on the target state. Another line of argumentation is that disinformation-propagating authoritarian regimes learned from each other’s campaigns and partnered to some extent in antidemocracy efforts. While Russia and China may share interests in authoritarianism and diminished Western powers, different ideologies, objectives, and relationships drive them. Cooperation among authoritarian states appears to be out of convenience more than partnership, as they are simultaneously competing with each other.

Russia and China present two of the authoritarian powers most threatening to democracies, given their strong capacities to deploy antidemocratic disinformation. Russia appears the more immediate and brazen threat; however, China presents a long-term threat to democracies as it strategically grows its geopolitical power. Unlike Russia, with its abrasive approach, China is more concerned with promoting a positive image and maintaining workable relations with Western democracies. The recent progress in China’s disinformation ecosystem merits continued analysis in the coming years, coupled with lessons learned from Russia’s long-running disinformation ecosystem, particularly in the Russia–Ukraine war. Applying this research’s analytical framework to other authoritarian regimes, including Iran and North Korea, can assess additional influence efforts.

A healthy democracy relies on trust and access to quality information. Social networks drastically changed the information environment, creating opportunities for increased online information and disinformation alike. We found varying authoritarian strategies to undermine faith in democratic processes through disinformation. Whether or not these efforts swayed outcomes, the campaigns worsened societal divides and eroded trust among citizens. Even the most resilient democracies must protect themselves and their information environments against the ongoing threat of foreign interference.

4 Cyber Challenges to Democracy and Soft Power’s Dark Side

Despite initial hopes that advances in information technology would spread and deepen democracy around the world, new platforms for communicating have instead provided opportunities for the weakening of democracy. Social media, website hosting, messaging apps, and related technologies provide easy and cheap ways for micro-actors such as individuals and small groups (in addition to more traditional state and non-state actors) to wield soft power for antidemocratic purposes. Of course, the probability that any one soft power action on the part of a micro-actor will have a consequential effect in the world is minuscule. Yet cyber-enabled micro-actions by micro-actors can make a difference, often one that has a negative effect on democratic institutions. QAnon, the anti-vaxx movement, and the spread of racism, antisemitism, and Islamophobia online are just a few examples of the effects of aggregated micro exercises of soft power. Spreading mis- and disinformation as well as divisiveness and hateful messages through social media is a kind of malign soft power that undermines democracy.

Although soft power is generally thought to be the "good" kind of power, since it works by attraction rather than coercion,Footnote 1 it has a dark side (Marlin-Bennett, 2022) when the motive is to harm – and I include undermining democracy as a kind of harm. While adversarial state actors or other large and well-resourced non-state actors can easily deploy malign soft power, micro-actors can as well because the costs of doing so are low.Footnote 2 Exactly who is acting is often unknown, which complicates how to fight back. Furthermore, even when micro-actors are behind antidemocratic activities, such efforts may align with the interests of states or non-state adversaries that seek to subvert democracy and weaken democratic states. Despite the difficulties of identifying the actors who wield this power, we can observe the wieldingFootnote 3 of it as flows of information move through social networks.

This chapter provides a framework for analyzing antidemocratic soft power that focuses on flows of antidemocratic messaging. I begin with a review of the initial hopes for information technologies’ contributions to a more democratic world and why those hopes were dashed. In the second section, I explain why it is reasonable to analyze these attacks on democracy by focusing first on the information as it flows rather than on the actors who are attacking. Actors, I argue, are emergent, which is especially relevant in the context of cyber-enabled technologies. In the third section, I focus on conceptualizing soft power, including its malign form that can be used to undermine democracy. In the fourth section, I examine the wielding of antidemocratic soft power through control of information flows (content, velocity, and access). The chapter ends with brief concluding comments.

Initial Hopes

In 2004, the internet became truly interactive. Previously, accessing the internet meant users seeing the information that was provided to them and providing the information that was requested of them. Web 2.0, the web of social media, allowed users to post content they chose to share online and to do so in a way that allowed their posts to be visible to their friends or interested others. This was widely considered to be a very good thing. In 2006, Time Magazine named "You" the person of the year, because "[y]ou control the Information Age" by providing content. Lev Grossman, then a technology writer for the magazine and also a novelist, praised internet users – "for seizing the reins of global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game." The tribute continues with a minor caveat, but ends with what in hindsight seems to be recklessly positive spin:

Sure, it’s a mistake to romanticize all this any more than is strictly necessary. Web 2.0 harnesses the stupidity of crowds as well as its wisdom. Some of the comments on YouTube make you weep for the future of humanity just for the spelling alone, never mind the obscenity and the naked hatred.

But that’s what makes all this interesting. Web 2.0 is a massive social experiment, and like any experiment worth trying, it could fail. … This is an opportunity to build a new kind of international understanding, not politician to politician, great man to great man, but citizen to citizen, person to person. It’s a chance for people to look at a computer screen and really, genuinely wonder who’s out there looking back at them. Go on. Tell us you’re not just a little bit curious.

(Grossman, 2006, n.p.)

Many others writing in the popular media and scholarly/scientific literature echoed this optimism, sometimes downplaying the obvious caveats. Canadian commentators Don Tapscott and Anthony D. Williams argued for a positive vision of the transformative nature of information technology-enabled interaction, focusing on the possibilities for collaboration to radically alter (in a good way) business and society (Tapscott & Williams, 2006). In later work they also claimed that collaborative governance – citizens being able to weigh in on domestic and transborder policy issues – would improve the democracy of governments and open the world to democratic participation (Tapscott & Williams, 2010). In an interview in CIO Insight, an information technology industry trade magazine, Williams lauded the potential of "governance webs" – interactive websites for policy deliberation and sharing of information. The new capacity for interactivity online "provides a mechanism for collaboration of public agencies, the private sector, community groups and citizens." While Williams cautioned that Web 2.0 would not be a "silver bullet [delivering] world peace," he still foresaw "a new golden age of democracy" (quoted in Klein, 2008, p. 36).

Many of the early scholarly publications on Web 2.0 and its broader social consequences also made optimistic claims about the exciting democratizing potential of e-democracy, though scholarly works tended to be more moderate than publications written for a broader audience. Many scholars acknowledged that various hiccoughs, such as the possible lack of citizen interest in participating online, could limit the democratizing potential of this new technology. For example, Kalnes (2009), writing on the 2007 elections in Norway, found that Web 2.0 allowed for greater ease of participation for citizens who wished to engage, even though its effect on pluralism was limited. (See also Anttiroiko, 2010; Boikos, Moutsoulas, & Tsekeris, 2014; Breindl & Francq, 2008; Costa Sánchez & Piñeiro Otero, 2011; Parycek & Sachs, 2010; Raddaoui, 2012; Reddick & Aikins, 2012.)

Yet, other scholars provided strong warnings. In an early article on this topic, Cammaerts (2008, p. 359) notes the high hopes that analysts had for a more expansively pluralist society in which anyone could say what they wish (enabling the "radical plurality of the blogosphere"). More to the point, though, he identifies antidemocratic pressures, including those resulting from peer-to-peer intimidation leading to self-censorship, and "the existence of anti-publics, abusing the freedom of expression with the aim to weaken democracy and democratic values" (Cammaerts, 2008, p. 372).Footnote 4 (Also see concerns raised by, inter alia, Marlin-Bennett, 2011; Schradie, 2011; Van Dijck & Nieborg, 2009.) In the years that have passed since those earlier assessments, Cammaerts’ and other scholars’ pessimism has been validated. The failure of this massive social experiment (to use Grossman’s term) has had and continues to have correspondingly negative social repercussions.

Despite prior naive expectations that social media and related forms of communication would only encourage people to embrace democracy, these cyber-enabled technologies have also opened opportunities for actors to use soft power to undermine democracy.

Why Not Start by Figuring Out Who the Bad Guys Are?

The analysis of power usually starts with an assessment of who (or what) is acting on whom. However, the nature of actors on social media and related technologies is a moving target. All actors continue to change as a consequence of their interactions, and new and often surprising actors pop up (and often disappear). As Berenskötter (2018) suggests, actors’ ontologies and their motives are constructed through their interactions. In this section, I discuss the governing logics of cyberspace and the emergence of actors to buttress my claim that it makes sense to first focus on the flows of information rather than on who is enacting the flows in the analysis of antidemocratic soft power.Footnote 5

Libertarian and Neoliberal Logics

Cyberspace, encompassing the interactions among users of websites, social media platforms, messaging, and related technologies, has been constituted through the libertarian and neoliberal logics that generate the technical and regulatory structures of internet development and policymaking. Cyberspace consequently usually permits anonymity, pseudonymity, and even spoofing.Footnote 6 Although early in the development of the internet many computer scientists and engineers held a soft socialist view that saw code as something to be shared and the internet as something of a public service, by the 1980s that had changed (in tandem with larger social shifts). Internet developers had adopted a Silicon Valley worldview: "‘Technolibertarianism’ became one of the central ideologies of the Internet" (Rosenzweig, 1998, p. 1550; see also Borsook, 1996). Decentralized, participatory decision-making procedures, which in the earlier period signaled equality and camaraderie, were repurposed to fit libertarian norms of limited government intervention and individualism. The emphasis on individual liberty fits with allowing individuals to hide their identities as the default setting, making anonymity and pseudonymity permitted and warranted (see Berker, 2022, on deontic and fitting categories). Put differently, it would have been possible to engineer into internet systems a strong requirement or an expectation that users would have to identify themselves truthfully to have an online presence. That did not happen.

The move to libertarian logics included a shift toward business and profit-making, prioritizing the commodification of information and the protection of intellectual property (Marlin-Bennett, 2004), all of which overlap with neoliberal logics. The Clinton administration designed support for neoliberal logics into its formative internet policies, as evinced by the decision to open the internet to commercial activities and by the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) as a private, nonprofit, multistakeholder organization that would perform internet governance functions. In doing so, the United States maintained its hegemonic position and the power of corporations in cyberspace (Taggart & Abraham, 2023).Footnote 7 Requiring identification by default would have added friction to a system optimized for the efficient, fast transactions of a market. For users’ anonymity and pseudonymity, this meant there was no reason to change course and require truthful identification by default. Nor was there any requirement that a person (natural or legal) could have only one cyber identity. In short, the libertarian and neoliberal underpinning of the structure of the internet is constitutive of a cyberspace in which users generally may keep their (multiple) identities hidden.

Actors as Emergent

Furthermore, actors and agencies are emergent (Abraham, 2022; Chatterje-Doody & Crilley, 2019; Dunn Cavelty & Jaeger, 2015; Elder-Vass, 2008), regardless of whether the actors are identified, pseudonymous, or anonymous. As Karen Barad explains: Actors "do not preexist their interactions; rather [actors] emerge through and as part of their entangled intra-relating." Emergence continues as actors "are iteratively reconfigured through each intra-action" (Barad, 2007, p. ix). When actors’ identities are reasonably stable, efforts to identify bad actors open opportunities for deterring their actions in the future. That is often the case when actors are states or recognized as institutionally coherent non-state actors, though even if they do exist in a recognizable form, they are changing.

The emergence of surprising new micro-actors whose messages undermine democracy is not a novel phenomenon of the Information Age. Henry Ford (1863–1947), well-known as the founder of Ford Motor Company, surprisingly became a leading proponent of antisemitic hate and disinformation. His hate-filled screed, The International Jew, was subsequently used for Nazi propaganda (Flink, 2000). The anti-Black and antisemitic Ku Klux Klan, which was created in 1865 by a group of Confederate Army veterans in Pulaski, Tennessee, is another example (Baudouin, 2011; Quarles, 1999).

However, cyber-enabled technologies, because of their affordability, make it easier for surprising new micro-actors to participate in spreading hate, divisiveness, and mis- and disinformation. Racism, antisemitism, Islamophobia, and other forms of intergroup hatred have resurged and spread through cyber-enabled technologies, and a large proportion of people have been exposed to abusive content (ADL, 2023; Vidgen, Margetts, & Harris, 2019). Cyber-enabled technologies have allowed anti-vaxx groups to share anti-vaxx misinformation and promote vaccine hesitancy and rejection, eroding trust in public health agencies and causing a drop in vaccination rates, as these groups become a "political force" in democratic societies (Piper, 2023; see also Burki, 2020; Wilson & Wiysonge, 2020). Perhaps most surprising is the pseudonymous QAnon, which first emerged on the 4chan social media site. QAnon adherents spread a bizarre conspiracy theory that combines alt-right divisiveness, disinformation about the outcome of the 2020 US presidential election, confabulation about the so-called deep state, and hatred for Jews, the LGBTQ+ community, and immigrants (QAnon | ADL, 2022). Many of the participants in the January 6, 2021, attack on the United States Capitol were QAnon followers (Lee et al., 2022).

In the next two sections, I focus upon conceptualizing malign soft power and exploring how it is wielded using cyber-enabled technologies.

Conceptualizing Malign Soft Power

Cyber-enabled technologies can be used to wield soft or hard power. Hard power cyberattacks, including sabotage of critical infrastructureFootnote 8 and ransomware, are only deployed in adversarial situations. Soft power, on the other hand, seems almost friendly.Footnote 9 The usual view is that soft power practices "contribute to a positive image that endears nations with soft power to other nations, which in turn enhances these soft power nations’ influence in world politics" (Gallarotti, 2011, p. 32, stress added).Footnote 10 Those who advocate for relying on soft power over hard claim that soft power "cultivates cooperation and compliance in a much more harmonious context" than hard power does (Gallarotti, 2022, p. 384, stress removed).Footnote 11 Other scholars have questioned this positive view. Successfully wielding soft power – even of the most pleasant sort – is forceful, in the basic sense of getting others to do what they would not have otherwise done, which means that interests have been denied or manipulated (Bially Mattern, 2005; Hayden, 2012, 2017). Soft power can also have negative unintended consequences (Johnson, 2011; Siekmeier, 2014). And to the point for understanding how soft power can negatively impact democracy: Soft power can be wielded in ways that undercut democracy.

As such, antidemocratic soft power is a kind of malign soft power, as opposed to good or neutral soft power. Malign soft power is a power of attraction used for harm (Marlin-Bennett, 2022), but discerning that a particular action was motivated by a wish to do harm is difficult. Nevertheless, even when we cannot see who is acting – as is often the case with cyber-enabled technologies – we can reasonably infer the motivation behind an action by drawing upon our own practical reasoning. Members of society routinely make such judgments, and inferring motivations from actions and their consequences is a normal part of social life. That otherwise well-meaning people may accidentally do something with a malign consequence, or be misconstrued as doing so, does not cancel out the quotidian way people adjudge actions. In addition, judgments of harm depend on the standpoint of whoever is making the determination,Footnote 12 and I acknowledge that my analysis comes from a pro-democracy position.Footnote 13 As I discuss in the next section, messages of divisiveness, hate, and mis- and disinformation become attractive (and therefore powerful) when they seduce, trick, or amuse those who are exposed to them into feeling that they share the sentiments.

Ironically, in Western democracies, wielding malign soft power is often legal, which further sets this kind of power apart from hard power. Democracy’s guarantee of freedom of speech makes punishing antidemocratic speech more difficult, though jurisdictions have different laws about the limits of protected speech. (European laws generally place more limits on speech than US law, but rights to freedom of expression still provide ample room for the lawful spread of politically divisive messages.) In some cases, state actors (e.g., Russia and China) initiate and/or amplify these messages; in other cases, they may simply be "home grown." How these messages are received is important, too. Most people who see a message of hate on X (formerly Twitter) or a bit of disinformation on TikTok will probably not be susceptible to the soft power lure of the antidemocratic content and will simply move on to the next message, one that is unlikely to be similarly problematic. However, some people will be attracted to the underlying antidemocratic message, and the normalization of antidemocratic language and images is itself problematic.Footnote 14

Wielding Malign Soft Power: Controlling Flows of Antidemocratic Information

Wielding malign soft power, the action of powering the soft power, refers to the instance of controlling flows of messages that are politically divisive, hate-filled, and/or mis- or disinformation through social networks, exposing recipients, some of whom are attracted by the antidemocratic messages. The flows have the properties of content (the messages that are divisive, hate-filled, or mis- or disinformation), velocity (direction and speed of the messages), and access to them (by choice of the person who is exposed, by chance, or by force).Footnote 15 Focusing on these properties allows patterns of actions rather than actors to be the subjects of analysis and rule in an interactional, social sense (Szabla & Blommaert, 2020).
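To make this framework concrete for empirical work, the following minimal sketch (a hypothetical Python illustration, not part of the chapter's argument; all field and type names are my own assumptions) shows how an analyst might record an observed flow with the three properties so that patterns of flows, rather than actors, become the unit of analysis.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class AccessMode(Enum):
    """How a recipient came to be exposed to a message."""
    CHOICE = "choice"   # recipient subscribed to or sought out the content
    CHANCE = "chance"   # recipient happened upon widely broadcast content
    FORCE = "force"     # an algorithm pushed the content without prior consent

@dataclass
class InformationFlow:
    """One observed flow of messaging through a social network (illustrative fields only)."""
    content_labels: List[str]          # e.g., ["disinformation", "hate", "divisive"]
    emotional_mode: str                # e.g., "seduction", "trickery", "amusement"
    posts_per_hour: float              # rough proxy for velocity (speed)
    platforms: List[str] = field(default_factory=list)  # proxy for direction/spread
    access_mode: AccessMode = AccessMode.CHANCE

    def crosses_platforms(self) -> bool:
        """True if the flow has jumped between platforms (a direction/spread marker)."""
        return len(set(self.platforms)) > 1

# Example record: a fast-moving, cross-platform flow pushed by recommendation algorithms.
flow = InformationFlow(
    content_labels=["disinformation"],
    emotional_mode="trickery",
    posts_per_hour=450.0,
    platforms=["X", "Rumble", "Gab"],
    access_mode=AccessMode.FORCE,
)
print(flow.crosses_platforms())  # True
```

Encoding flows this way keeps the analysis centered on what moves and how, leaving questions of attribution open.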

Content

The property of content refers to messages being transmitted and the emotions they convey. Message density is important as well, as the more times a particular message is received, the more it seems to be correct because it is common knowledge (Unkelbach et al., 2019). I focus here on three commonly deployed modes by which the content works: seduction, trickery, and amusement.Footnote 16 When malign soft power seduces, tricks, or amuses, it does so through a combination of semantic and emotional content of messages.

Seduction links semantic content to emotion by invoking desire in the recipient. Online, as in person, we recognize how attracting through seductionFootnote 17 (of the sexual (e.g., Faisinet, 1808) and nonsexual varieties (e.g., Bjerg, 2016)) can cause harm. Content eliciting these desires spreads online in various ways, including through social media and via niche websites. Antidemocratic seduction convinces people to feel attachments that are inconsistent with democracy, such as hatred for political opponents, rather than simple disagreement with them. The spread of White Christian nationalist identitarianism in North America and Europe, a movement that is profoundly antidemocratic (Zuquete & Marchi, 2023), is an example. Individual participants produce and reproduce seduction through identitarian practices. For example, sharing "dog whistles" that express hatred in coded language gives participants a seductive sense that they are in on the secret (Marlin-Bennett, 2022). Much of the content of the antisemitic feed of Stew Peters (@realstewpeters) on X (formerly Twitter) seeks to draw a White Christian audience into solidarity against Jews. Peters, who has a following on X of over 500,000 accounts (some of which may be automated), also engages in trickery about vaccines and other alt-right topics.

Trickery substitutes falsehoods for facts, claiming that lies (known to be false) and/or bullshit (statements that are disconnected from the truth) (Frankfurt, 2005) are actually true. This practice is in play when disinformation is spread as misinformation by gullible users. For example, the Stew Peters Network disseminated disinformation about the COVID vaccine in a video, Died Suddenly (Skow & Stumphauzer, 2022), on "Rumble, a moderation-averse video-streaming platform" (Tiffany, 2023). Peters also promoted the video on X, Gab, and other social media sites. The many likes his tweets have received suggest that his falsehoods have been received by and are attractive to other users. The many reposts suggest that others spread his disinformation as misinformation.Footnote 18 Peters’ seductive messaging and lies connect with similar views disseminated by other individuals, creating interconnected networks of people who share antisemitism, White Christian nationalism, and anti-vaxx views. The widespread campaign to convince people that Joe Biden stole the 2020 US presidential election works similarly, drawing on trickery and often blending with the seductiveness of White Christian nationalism. Its success can be seen in a March 2023 CNN/SSRS poll that found that 63 percent of Republicans continued to believe it (Durkee, 2023). Individuals who believe this falsehood have been tricked by antidemocratic mis- or disinformation.

When amusement is used for antidemocratic purposes, the content links humor or other pleasures to messages that in other contexts would be transgressive. A racist joke does not seem as bad to those who find it amusing. The mode of amusement allows funny or entertaining content to seem acceptable even when it is harmful (Apel, 2009; Gaut, 1998). Topinka (2018) examines the Reddit platform and specifically the r/ImGoingToHellForThis subreddit (now banned), in which members of the subreddit used humor to express racism, misogyny, antisemitism, and extreme anti-immigrant sentiments. Topinka provides a close analysis of how the redditors treated the famous, haunting picture of Alan Kurdi, the two-year-old Syrian refugee who drowned along with his mother and brother in the Mediterranean Sea while fleeing to the Greek isle of Cos. The posts remixed the image of the dead toddler on the beach into jokes that were at once racist and anti-immigrant extremist. As soft power practices, "[t]he very ostentation on which this humor relies thus functions as a cloak concealing the networks of racist sentiment that the discourse sustains" (Topinka, 2018, p. 2066). This now defunct subreddit and other similar sites simultaneously rely on the democratic principle of free speech and eschew core democratic values. More generally, humor is often used on social media platforms to catch the eye of the user who is scrolling through feeds, be they TikTok or Instagram or another app, to find a quick laugh. Amusement is antidemocratic when it works to normalize hate or convince people of the truthfulness of fallacious claims relevant to current politics. Jokes may intensify the connection between humor and hate (Askanius, 2021; Marlin-Bennett & Jackson, 2022).

Velocity and Access

Controlling velocity (direction and speed) and controlling access to messages are the other two properties through which soft power is wielded. Cyber-enabled technologies afford actors at all scales the capacity to manipulate these, but it is not necessarily the case that any specific techniques are relevant solely for antidemocratic purposes. If the content is antidemocratic, then increasing its velocityFootnote 19 – that is, increasing the speed at which the messages move and the spread of the messaging – is a means of wielding soft power to undermine democracy. Directions can be direct (going from a source to an expectant recipient) or circuitous. Speeds of message transmission range from fast to slow, and from constant to intermittent. The metaphor of a unit of information (e.g., a meme) "going viral" means it is spreading quickly and in multiple directions. Messages can also jump from platform to platform and across technologies (DiResta, 2018).
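As a purely illustrative sketch (the chapter does not propose a metric), velocity could be operationalized from timestamped share records: speed as shares per unit of time, and direction or spread as the mix of platforms reached. The records below are invented for the example.

```python
from datetime import datetime
from collections import Counter

# Hypothetical share records for one message: (timestamp, platform)
shares = [
    (datetime(2023, 5, 1, 9, 0), "X"),
    (datetime(2023, 5, 1, 9, 5), "X"),
    (datetime(2023, 5, 1, 9, 20), "Telegram"),
    (datetime(2023, 5, 1, 10, 45), "Rumble"),
    (datetime(2023, 5, 1, 11, 10), "X"),
]

def velocity(records):
    """Return (shares per hour, number of distinct platforms reached)."""
    times = sorted(t for t, _ in records)
    span_hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-9)
    speed = len(records) / span_hours
    spread = len({platform for _, platform in records})
    return speed, spread

speed, spread = velocity(shares)
print(f"{speed:.1f} shares/hour across {spread} platforms")
# Platform mix, as a rough proxy for the direction of the flow
print(Counter(platform for _, platform in shares))
```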

Bots are usually designed to increase velocity. Bessi and Ferrara analyzed tweets about the 2016 presidential election during a five-week period in the fall of 2016. They found that approximately one-fifth of these tweets were generated by bots. They conclude:

The presence of social bots in online political discussion can create three tangible issues: first, influence can be redistributed across suspicious accounts that may be operated with malicious purposes; second, the political conversation can become further polarized; third, the spreading of misinformation and unverified information can be enhanced.

(Bessi & Ferrara, 2016, n.p.)

Micro-actors can also increase velocity, which they do by reposting and commenting, as well as by moving posts to new platforms (Marlin-Bennett & Jackson, 2022). Wielding malign soft power could also work by decreasing the velocity of messages that support democracy.

Controlling access to information is another means of wielding soft power. At stake is whether someone is exposed – or not exposed (denied access) – to information by choice, by chance, or by force. A choice approach means providing access to those who have requested the information, those who are already attracted to the messages. This soft power doubles down on existing attachments. Social media sites like r/ImGoingToHellForThis work this way. Users choose to subscribe and, in doing so, build community among those who are attracted to the divisiveness, hate, and mis- and disinformation. An access-by-chance approach is not targeted at any specific actor but instead involves providing access widely and anticipating that some who happen upon the messages will be swayed by them. The actors behind bot accounts uncovered by Bessi and Ferrara (2016) probably disseminated manipulated information on X using a chance strategy of sending out a lot of content quickly and widely. The effectiveness of a chance approach depends on whether the messages find a core group of people who are receptive to them. Hindman and Barash (2018) also find more real news than fake news on X in the months before and after the 2016 election in the United States, but they remain concerned about the dense networks of followers of popular fake news accounts: "[T]he popularity of these accounts, and heavy co-followership among top accounts, means that fake news stories that reach the core (or start there) are likely to spread widely" (p. 4). (See also Grinberg et al., 2019.) The movement of antidemocratic messaging thus accelerates when it reaches a community that is disposed to be attracted to it. Forced access involves exposing a user to information in a targeted way but without prior subscription or other confirmation of willingness to receive it. Algorithms that display increasingly extreme messages to users force access to antidemocratic information upon them.
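The forced-access mode can be illustrated with a deliberately simplified toy model of a feed-ranking loop; it is not a description of any real platform's recommender, and all numbers are hypothetical. When ranking chases prior engagement, and the engagement signal favors more extreme items, the feed the user is forced to see drifts toward extremity.

```python
import random

random.seed(0)

# Hypothetical pool of 60 items, each with an "extremity" score in [0, 1].
pool = [{"id": i, "extremity": random.random()} for i in range(60)]

def rank_feed(items, inferred_bias, k=5):
    """Toy ranker: serve the k items closest to the user's inferred preference (forced access)."""
    return sorted(items, key=lambda item: abs(item["extremity"] - inferred_bias))[:k]

inferred_bias = 0.3   # the ranker's current estimate of what the user wants
learning_rate = 0.8   # how strongly engagement updates that estimate

for step in range(12):
    feed = rank_feed(pool, inferred_bias)
    # Toy engagement model: the user lingers on the most extreme item shown.
    engaged = max(feed, key=lambda item: item["extremity"])
    # The ranker updates its estimate toward the engaged item, drifting upward over time.
    inferred_bias += learning_rate * (engaged["extremity"] - inferred_bias)
    print(f"step {step:2d}: inferred preference -> {inferred_bias:.2f}")
```

Even this crude loop drifts toward more extreme content without any step that explicitly selects for extremity, which is one way escalation can arise from the assemblage rather than from a single deliberate choice.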

An algorithm’s control of the flow of antidemocratic information is an instance of deploying malign soft power even though the algorithm itself does not have motivation in the same way a human does. Algorithms that determine what appears in social media newsfeeds are part of an assemblage (Bennett, 2005) determining access to information. DeCook and Forestal argue that "digital platforms not only curate and channel certain content to individual users but also facilitate a particular mode of collective thinking that [they] term undemocratic cognition" (2023, p. 631).Footnote 20 The practice of curating and channeling is clearly an instance of soft power in which individuals are subject to attraction. Motivations, I suggest, are written into algorithms that collect data, analyze what would attract users in a way that serves the motivations of the assemblage, and then produce a newsfeed that gives users access to certain messaging and withholds access to other messaging. (And because of the vagaries of coding, the outcomes of this process may or may not be wholly what the firm’s management had expected.) While "make money!" is a compelling motivation for many social media firms, concerns have been raised about Chinese-owned social media like TikTok (The Economist, 2023). Kokas notes that China seeks to "manipulate messaging to key [Western] constituencies" for (implicitly) antidemocratic ends (2022, p. 95; see also Zhong, 2023). Additionally, a failure to actively protect against undermining democracy by preventing access to a biased stream of information in one’s newsfeed also suggests a motivation that undervalues the protection of democracy. This is perhaps a lesser kind of maliciousness, one of omission rather than commission.

Control over content, access, and velocity is usually exercised in combination. For example, information flows that flood social media quickly move a large volume of messages, in terms of the number of posts and/or the amount of content within posts (Cirone & Hobbs, 2023). Such flows, characterized by steady streams of the same untruth, contribute over time to making false claims seem true as a function of "truth by repetition" (Lewandowsky et al., 2012; Morgan & Cappella, 2023; Unkelbach et al., 2019).
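As a minimal, hypothetical sketch of how repetition density might be surfaced in practice, the snippet below groups near-duplicate posts and flags claims whose share of a stream is high enough to suggest flooding; the normalization rule and threshold are my own assumptions, not values drawn from the cited studies.

```python
from collections import Counter
import re

# Hypothetical stream of posts (in reality, millions of posts with timestamps).
posts = [
    "The election was STOLEN!!",
    "the election was stolen",
    "Election was stolen.",
    "Lovely weather in Taipei today",
    "The election was stolen",
    "New ferry schedule announced",
    "THE ELECTION WAS STOLEN",
]

def normalize(text):
    """Crude normalization so near-duplicates collapse into one claim."""
    return re.sub(r"[^a-z ]", "", text.lower()).strip()

counts = Counter(normalize(p) for p in posts)
total = len(posts)

REPETITION_THRESHOLD = 0.4  # assumed: flag claims making up more than 40% of the stream

for claim, n in counts.most_common():
    share = n / total
    flag = "FLOOD-LIKE" if share > REPETITION_THRESHOLD else ""
    print(f"{share:.0%}  {claim!r} {flag}")
```

The deliberately crude normalization also illustrates a real difficulty: small variations in wording ("Election was stolen" versus "The election was stolen") escape naive grouping, which is one reason flooding is hard to measure at scale.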

As Figure 4.1 summarizes, someone or something wields malign soft power for antidemocratic motives by controlling the flows of information. In the background of this analysis are those for whom the information flow is a hard power attack. In the foreground are the targets of malign soft power, the recipients of flows of attractive hate, mis- and disinformation, and other content that undermines democratic institutions. The intended recipients are those likely to be swayed by the information presented to them.

Figure 4.1 Wielding malign soft power. [The figure depicts antidemocratic information, wielded as malign soft power, flowing toward targets who may be manipulated; the flow is shaped by content, velocity, and access.]

Concluding Comments

This chapter makes a simple point: By carefully analyzing antidemocratic efforts using cyber-enabled technologies as malign soft power, we can see that the power of attraction is not necessarily harmless, nor even less harmful than the power of coercion. The affordances of social media platforms and other participatory media make it easy for emergent actors to contribute to efforts undermining democracy. Antidemocratic content that is seductive, deceptive, or amusing flows easily and quickly throughout social networks. Some actors further disseminate this information fully cognizant of its nature, willing participants in its spread. Others are seduced, distracted by the fun they are having, or simply duped. These are the unwitting actors who further spread hate and mis- and disinformation.

Who the bad actors are may or may not be easy to discern. The low cost of malign soft power resources means that both existing actors and surprising new ones can wield this kind of power. Actors are emergent: Wielding malign soft power dynamically constitutes actors’ identities. Moreover, that information flows, that it moves over time and space, is key. Manipulating content, velocity, and access and thereby making divisiveness, hate, and mis- and disinformation more available, more quickly, more widely, and to more users is harmful. The common interest is in discovering means of preventing or, if necessary, stopping these power flows and remediating existing harms. While governance can also be harmful, especially if it limits freedom of speech, limits rights to privacy, and is targeted toward vulnerable populations, good governance methods do not single out individuals but rather look at patterns of flows. A purpose of this framework for analyzing antidemocratic soft power is to uncover possible intervention points, the points in the flow of information at which defensive mechanisms can prevent or remediate the harms of malign soft power.

Each of the properties of information flows, the content, velocity, and access, provides opportunities for countering antidemocratic challenges, but undertaking democracy-affirming efforts must be done in a manner that preserves freedom of information. I am mindful of Friedrich Nietzsche’s warning:Footnote 21 “He who fights with monsters should be careful lest he thereby become a monster. And if [you] gaze long into an abyss, the abyss will also gaze into [you]” (2012, p. 83, sec. 146).

Cyber-enabled technologies have become an essential part of life in many ways and are necessary for democracies to function, but these technologies also afford the means to disrupt democracy. Understanding how antidemocratic soft power works and is wielded is just one tool for building resilient democracies.

5 Cyber Disinformation Risk Perception and Control Integration of the Extended Theory of Planned Behavior and a Structural Equation Model

The extensive use of digital platforms and services has widely expanded the range of potential attacks and targets, rendering individuals, democratic values, and democratic institutions vulnerable to a substantial number of cyber-enabled threats. These threats can be sophisticated, conducted on a large scale, and capable of producing significant, viral consequences. Among them, cyber disinformation is regarded as a major threat. The phenomenon is widespread and complex and is, in certain cases, part of hybrid warfare: nefarious actors conduct various cyberattacks and deceptively distribute fake or incomplete materials with a view to influencing people’s opinions or behavior.

Disinformation can involve numerous vectors and take several forms. The goal of disinformation campaigns is to promote or sustain certain economic or political interests, to foster discrimination, phobia, or hate speech, or to harass individuals (European Parliament, 2022). Instances of alleged disinformation can be encountered with respect to a large variety of topics, such as food (Diekman, Ryan, & Oliver, Reference Diekman, Ryan and Oliver2023); migrants (Culloty et al., Reference Culloty, Suiter, Viriri and Creta2022); fossil fuels;Footnote 1 sexual preferences (Carratalá, Reference Carratalá2023); health hazards;Footnote 2 politics;Footnote 3 and so on.

Successful disinformation campaigns can negatively affect fundamental freedoms, undermine trust, subvert attention, change attitudes, sow confusion, exacerbate divides, or interfere with decision-making processes. Consequently, such campaigns can rightly be considered attacks on knowledge integrity (Pérez-Escolar, Lilleker, & Tapia-Frade, Reference Pérez-Escolar, Lilleker and Tapia-Frade2023, p. 77). The potential consequences can be disquieting, negatively affecting democratic values and institutions (Jungherr & Schroeder, Reference Jungherr and Schroeder2021; Schünemann, Reference Schünemann, Cavelty and Wenger2022). The concerns over cyber disinformation are notable worldwide and received significant attention from researchers (Buchanan & Benson, Reference Buchanan and Benson2019; Nenadić, Reference Nenadić2019; Olan et al., Reference Olan, Jayawickrama, Arakpogun, Suklan and Liu2022; Pierri, Artoni, & Ceri, Reference Pierri, Artoni and Ceri2020; Tenove & Tworek, Reference Tenove and Tworek2019; Ternovski, Kalla, & Aronow, Reference Ternovski, Kalla and Aronow2022; Vaccari & Chadwick, Reference Vaccari and Chadwick2020; Weikmann & Lecheler, Reference Weikmann and Lecheler2022).

While there are laws that address the phenomenon (e.g., 18 U.S. Code § 35, the German Network Enforcement Act, the French Law on the fight against the manipulation of information), strengthened codes of practice (e.g., the European Commission’s Strengthened Code of Practice on Disinformation 2022), assignment of anti-disinformation attributions to governmental agencies (e.g., the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency), awareness campaigns, and implementation of disinformation detection and blocking algorithms or filters, the control of the phenomenon still poses significant challenges.

The control of the cyber disinformation phenomenon plays a significant role in the protection of democratic values and systems. This chapter argues that an essential role in the control of the phenomenon is played by the individual behavior of users and aims to identify factors that impact on Behavioral Intentions (BIs) and Cyber Hygiene Behavior (CHB), in the circumstances of cyber disinformation. The chapter integrates the Extended Theory of Planned Behavior (ETPB) and a Structural Equation Model. The research data were collected using a questionnaire. The model’s parameters were processed using the SmartPLS software.

The rest of this chapter is organized as follows. The next section outlines the phenomenon’s main attributes and explains how cyber-enabled means can threaten democratic values and institutions. The third section discusses aspects regarding structural equation modeling (SEM), applied to disinformation. The fourth section presents the conceptual model and the proposed hypotheses. Finally, the fifth section presents the model evaluation. The chapter concludes with implications of findings.

Cyber Disinformation Attributes

“Disinformation” is a term difficult to define because the phenomenon is complex (Ó Fathaigh, Helberger, & Appelman, Reference Ó Fathaigh, Helberger and Appelman2021) and covers many forms, such as “fabrications, fakeness, falsity, lies, deception, misinformation, disinformation, propaganda, conspiracy theory, satire or just anything with which one disagrees” (Andersen & Søe, Reference Andersen and Søe2020, p. 6). Wardle and Derakhshan (Reference Wardle and Derakhshan2017), for instance, contrast “disinformation,” referred to as intentionally false or deceptive communication, with “misinformation,” understood as communications that may contain false claims but are not intended to cause or inflict harm. The European Commission (Reference von der Leyen2020, p. 18) clearly distinguishes between misinformation, information influence operations, foreign interference in the information space, and disinformation, defining the latter as “false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm.”

Disinformation can be orchestrated by individuals or by organized groups (state or non-state) and involve various sources, such as regular citizens, political leaders or officials, attention-seeking trolls, profiteers, or propagandistic media (Watson, Reference Watson2021). Several factors have been identified that favor the phenomenon, such as the tendency to believe unreliable statements (European Parliamentary Research Service, 2020), people’s difficulties in identifying disinformation (Machete & Turpin, Reference Machete, Turpin, Hattingh, Matthee, Smuts, Pappas, Dwivedi and Mantymaki2020), identity-confirmation problems, and deficiencies in platform filtering (Krafft & Donovan, Reference Krafft and Donovan2020).

According to Bontcheva et al. (Reference Bontcheva, Posetti, Teyssou, Meyer, Gregory, Hanot and Maynard2020, pp. 22–23), disinformation can take various formats, such as false claims or textual narratives; altered, fake, or decontextualized audio and/or video; and fake websites and manipulated datasets. Cyber disinformation campaigns can involve, for example, deceptive advertising, propaganda, or the dissemination of forged materials, such as videos, photographs, audios, documents (including, for instance, fake web pages or maps), created through image altering or airbrushing or cover-up, or by audio camouflage.

One characteristic of disinformation campaigns is the existence of deceptive goals. According to Fallis (Reference Fallis, Floridi and Illari2014, pp. 142–146), deceptive goals can concern the accuracy of the content, the source’s belief in the content, the identity of the content’s source, and the implications of the content’s accuracy. Disinformation can cause significant harm, as it has the real potential to confuse or manipulate people, suppress the truth or critical voices, generate distrust in democratic institutions or norms, and even disrupt democratic processes (Bontcheva et al., Reference Bontcheva, Posetti, Teyssou, Meyer, Gregory, Hanot and Maynard2020).

Social media is often regarded as a highly effective vector to promote political goals via disinformation campaigns (Aïmeur, Amri, & Brassard, Reference Aïmeur, Amri and Brassard2023). Twitter, now called “X,” for example, was used to spread misleading election memes and graphics that went viral, designed to demoralize opponents’ voters and even deter them from exercising their right to vote.Footnote 4 As an illustration of the importance attached to X as a disinformation vector, according to Statista research, the number of disinformation and pro-Russian posts on X in Poland amounted to 25,910 in January 2022, increasing to 358,945 over the course of the year (Statista, 2023).

In practice, disinformation campaigns employ an impressive array of tactics, including the impersonation of organizations or real people; the creation of fake or misleading online personas or websites; the creation of deepfakes or synthetic media; the devising or amplification of certain theories; astroturfing and flooding; the exploitation of gaps in information; the manipulation of unsuspecting people; and the spread of targeted content (Cybersecurity and Infrastructure Security Agency, 2022). Of particular concern, given their massive disinformation potential, are deepfakes. In video and/or audio form, deepfakes are nowadays very realistic, enabling morphing attacks, the creation of unreal faces or voices, and personalized messages to individuals. Deepfakes can negatively affect the credibility of individuals, disrupt markets, facilitate fraud, manipulate public opinion, incite people to various forms of violence, and support extremist narratives, social unrest, or political polarization (Mattioli et al., Reference Mattioli, Malatras, Hunter, Biasibetti Penso, Bertram and Neubert2023; Trend Micro, 2020). Moreover, deepfakes undermine conversations about reality and can disrupt democratic politics (Chesney & Citron, Reference Chesney and Citron2019).

Personalization algorithms can be employed to facilitate the spreading of disinformation, potentially making it thrive on digital platforms (Borges & Gambarato, Reference Borges and Gambarato2019). The techniques or means employed in disinformation campaigns may also include, for instance, bots. These are increasingly difficult to distinguish from humans and can be effectively used to produce disinformation content, targeted at predetermined or general users (Edwards et al., Reference Edwards, Beattie, Edwards and Spence2016). For instance, bots are used to disseminate election disinformation (Knight Foundation, 2018) or disinformation regarding health issues (Benson, Reference Benson2020).

Artificial neural networks (ANNs) and deep learning methods can be used in disinformation campaigns, with unlawful or nefarious potential (Rana et al., Reference Rana, Nobi, Murali and Sung2022). Amplifiers, such as influential people or artificial intelligence tools, can be used, for instance via cross-platform coordination or the manipulation of engagement metrics, to maximize engagement or the spread of disinformation through networks, for example by retweeting or following X accounts or by sharing Facebook (Meta) posts (European Commission, 2023; Michaelis, Jafarian, & Biswas, Reference Michaelis, Jafarian and Biswas2022). Clickbait is another disinformation method used to attract online users to click on disinformation links (Collins et al., Reference Collins, Hoang, Nguyen and Hwang2021).

Structural Equation Modeling of Disinformation

The original approach to modeling technology use assumed that the use of technical systems could be explained and predicted by user motivation, which is directly influenced by external factors (i.e., the functionalities and capabilities of those technical systems) (Chuttur, Reference Chuttur2009; Davis, Reference Davis1985). The Technology Acceptance Model (TAM) was proposed to explore behavior and user acceptance of Information and Communication Technology (ICT) from a social psychology perspective. The TAM assumes that two factors determine users’ acceptance of a technology: (1) perceived usefulness and (2) perceived ease of use. The first refers to the user’s belief about the degree to which using the technology enhances their job performance, productivity, or overall effectiveness. The second represents the user’s perception of how easy the technology is to apply. In general, the acceptance of technology is a critical element and a necessary condition in the implementation of ICT in everyday life. An extensive literature survey conducted using the Scopus database identified 18,639 papers on TAM published between 1964 and 2023.

Over the decades, several theoretical models have been developed to understand the acceptance and use of ICT. Researchers have been hesitant in selecting the appropriate theoretical model for evaluating ICT acceptance and usage. Recognizing ICT needs and ICT acceptance by individuals in business organizations is usually the beginning stage of any business activity, and this understanding can help chart the future implementation of ICT.

In general, TAM models are estimated through SEM, an approach for testing hypotheses about relations among observable and latent variables (Sabi et al., Reference Sabi, Uzoka, Langmia and Njeh2016). SEM is a statistical method applied in various fields of the social sciences to estimate relationships among specified variables and to verify whether those relations are statistically dependable and valid. In this study, SEM is realized through Partial Least Squares Structural Equation Modeling (PLS-SEM), a composite-based SEM method (Hair Jr. et al., Reference Hair, Hult, Ringle and Sarstedt2017). Partial Least Squares (PLS) is a statistical method for estimating relationships between independent and dependent variables.
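To make the estimation idea concrete, the following is a minimal sketch of the composite logic behind PLS-SEM, under simplifying assumptions: construct scores are proxied by equally weighted item composites and a single structural path is estimated by ordinary least squares. Full PLS-SEM, as implemented in SmartPLS, iteratively re-estimates the indicator weights, so this is illustrative rather than a faithful implementation; the item names and data are hypothetical.

```python
# Simplified illustration of the composite logic behind PLS-SEM: latent
# constructs are proxied by equally weighted composites of their indicators,
# and a structural path is then estimated by least squares. Full PLS-SEM
# iteratively re-estimates the indicator weights; this sketch skips that step.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200  # e.g., the size of the Polish subsample

# Hypothetical 7-point Likert responses for three PBC items and three BI items.
data = pd.DataFrame(
    rng.integers(1, 8, size=(n, 6)),
    columns=["PBC1", "PBC2", "PBC3", "BI1", "BI2", "BI3"],
)

# Step 1: composite (construct) scores as standardized item means.
pbc = data[["PBC1", "PBC2", "PBC3"]].mean(axis=1)
bi = data[["BI1", "BI2", "BI3"]].mean(axis=1)
pbc = (pbc - pbc.mean()) / pbc.std()
bi = (bi - bi.mean()) / bi.std()

# Step 2: structural path PBC -> BI estimated by ordinary least squares.
X = np.column_stack([np.ones(n), pbc])
beta, *_ = np.linalg.lstsq(X, bi.to_numpy(), rcond=None)
path_pbc_bi = beta[1]

# R^2 of the endogenous construct.
residuals = bi.to_numpy() - X @ beta
r2 = 1 - residuals.var() / bi.to_numpy().var()
print(f"path PBC -> BI = {path_pbc_bi:.3f}, R^2(BI) = {r2:.3f}")
```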

For the past thirty years, the research community has been strongly involved in the identification of the factors that have an impact on technology acceptance. The theory of reasoned action (TRA) and the theory of planned behavior (TPB) were predecessors of the TAM (Marikyan & Papagiannidis, Reference Marikyan and Papagiannidis2021; Park & Park, Reference Park and Park2020). The TRA explains and predicts human behaviors considering their attitudes and Subjective Norms (SN). That theory assumes that individuals make rational decisions based on their attitudes and social norms.

The TPB is an extension of the TRA, also developed by Ajzen (Reference Ajzen2005). The TPB explains and predicts individual behavior based on human intentions, which depend on three factors: attitude, identified with personal beliefs; SN, referring to social pressure; and Perceived Behavioral Control (PBC), encompassing self-efficacy, perceived obstacles, and facilitators. The TAM underwent further modifications (i.e., TAM2, TAM3); however, researchers have also utilized the Unified Theory of Acceptance and Use of Technology (UTAUT) model, which suggests that the actual use of technology is determined by BI. BI, in turn, depends on four key constructs: performance expectancy, effort expectancy, social influence, and facilitating conditions. The effect of these variables is moderated by age, gender, experience, and voluntariness of use (Venkatesh et al., Reference Venkatesh, Morris, Davis and Davis2003). Further, researchers have noticed the importance of factors reflecting the costs and benefits of behavior, as well as the context of use.

Venkatesh, Thong, and Xu (Reference Venkatesh, Thong and Xu2012) proposed the UTAUT2 model, which extends the UTAUT to consumer use contexts. The authors of the UTAUT2 argue that the use of technology by individuals is determined by the following constructs: performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value, and habit, moderated by age, gender, and experience. In both the UTAUT and UTAUT2 models, BI has an impact on use behavior.

This study aims to identify factors that have an impact on Behavioral Intentions (BIs) and Cyber Hygiene Behavior (CHB) under conditions of widely disseminated disinformation in cyberspace. BIs comprise an individual’s predispositions and willingness to behave in a specific way; the concept is included in various theories of human behavior to analyze and predict human actions. CHB covers the practices that help users avoid becoming victims of cyberattacks and that reduce cyber vulnerabilities.

This study considered state-of-the-art research on disinformation attitudes. The Scopus literature survey revealed 4,526 publications matching the keyword “disinformation,” published between 1974 and 2023. However, there are only nineteen publications matching the query “disinformation” AND “structural equation modelling,” all published between 2020 and 2023. Certain countries (i.e., China, Russia, and Turkey) have professionalized online operations to support social media campaigns, create an alternative informational space, and effectively disseminate persuasive messages through symbols and emotions. Monitoring their actions, as well as online trolling, is therefore a subject of research (Alieva, Moffitt, & Carley, Reference Alieva, Moffitt and Carley2022; Uyheng, Moffitt, & Carley, Reference Uyheng, Moffitt and Carley2022).

Several researchers examine the phenomenon of disinformation as a threat in the sphere of cybersecurity (Caramancion et al., Reference Caramancion, Li, Dubois and Jung2022; Carrapico & Farrand, Reference Carrapico and Farrand2021). Hence, cybersecurity issues have been included as observable variables (i.e., Security (SEC) items) in the proposed survey. Arayankalam and Krishnan (Reference Arayankalam and Krishnan2021) formulated some hypotheses, which were positively verified, concerning disinformation, as follows:

  • Disinformation through social media is positively associated with online media development.

  • Online media development is positively associated with social media-induced offline violence.

  • Government control negatively moderates disinformation.

In social psychology, the TPB is one of the most influential behavioral models. The TPB links beliefs to behavior and assumes that the user’s behavior is determined by their intentions to perform that behavior. In the conceptual model proposed by Jia et al. (Reference Jia, Yu, Feng, Ning, Cao, Shang, Gao and Yu2022), for example, several factors have an impact on Behavioral Attitudes (BA), Subjective Norm (SN), and Perceived Behavioral Control (PBC). Further, the three variables determine BI. Moreover, Shirahada and Zhang (Reference Shirahada and Zhang2022) argue that TPB is used to predict and explain human intentions in a particular context. Intentions are influenced by SN, attitudes, and PBC. SN concern the expectations of other people and social pressures regarding desirable behavior. Attitude refers to evaluations of behavior, and PBC refers to the ease of performing a behavior.

Maheri, Rezapour, and Didarloo (Reference Maheri, Rezapour and Didarloo2022) argue that the Perceived Risk (PR) refers to subjective assessments of the risk of disinformation and its potential consequences. SN refer to respondents’ beliefs that significant others think they should or should not engage in a particular behavior. PBC concerns participants’ perceptions of their ability to do verification.

Romero-Galisteo et al. (Reference Romero-Galisteo, Gonzalez-Sanches, Galvez-Ruiz, Palomo-Carrion, Casuso-Holgado and Pinero-Pinto2022) consider that the TPB explains the degree of correlation between variables, that is, entrepreneurial intention, perceived feasibility, and perceived desirability. Beyond that, Savari, Mombeni, and Izadi (Reference Savari, Mombeni and Izadi2022) develop an Extended Theory of Planned Behavior (ETPB) that adds the variables Descriptive Norms (DN), Moral Norms (MN), Habits (HA), and Justification (JU). In their theoretical framework, Attitude, SN, DN, and PBC have impacts on Intention, while PBC, MN, and variables such as HA and JU influence Behavior.

MN capture a sense of inherent moral commitment, grounded in a value system. The concept of DN captures a person’s perception of how much other people exhibit a certain behavior. These norms are introduced because people learn not only from their own experiences but also from observing the behavior of others. A comparable extension of the TPB was provided by Khani Jeihooni et al. (Reference Khani Jeihooni, Layeghiasl, Yari and Rakhshani2022), who included attitude, PBC, SN, and BIs, among other variables, in their survey.

According to Ababneh, Ahmed, and Dedousis (Reference Ababneh, Ahmed and Dedousis2022), the TPB proposes four major predictors of human behavior (i.e., attitude toward the behavior, SN, BI, and PBC). Ajzen (Reference Ajzen1991, p. 183) argues that attitude, norms, and control determine intentions and behavior. Cabeza-Ramirez et al. (Reference Cabeza-Ramirez, Sanchez-Canizares, Santos-Roldan and Fuentes-Garcia2022) note that the literature has rarely considered the possible perception of risk associated with desirable behavior. Similarly, risk is included in the TPB model proposed by Zhang et al. (Reference Zhang, Shi, Chen and Zhang2022). Security is considered in the TPB model developed by Al-Shanfari et al. (Reference Al-Shanfari, Yassin, Tabook, Ismail and Ismail2022), who used the SEM method to reveal factors that have an impact on information security behavior adoption and employees’ training.

The Conceptual Model and Hypotheses

Considering the literature survey on latent variables included in TPB models, this study notes that there is no standardized approach: The models are formulated according to the preferences of researchers, and various extensions are possible. Therefore, this study focuses on the application of the ETPB model; however, additional constructs are added, which are expected to capture the context of behavior.

This study defines the BA as the degree to which a person believes that they can properly recognize disinformation. The BA influences the decision on whether to accept or reject the information. The BA reveals the extent to which a person believes that the use of information is needed and not harmful. BA refers to personal predispositions to act in a specific way, regarding a particular object, person, situation, concept, or technology.

Beyond that, this study proposes to include three types of norms, that is, MN, SN, and DN. MN result from personal internal beliefs not to tolerate disinformation. In general, MN are principles or rules that govern human behavior and establish what is right or wrong in a social community. SN concern a personal perception of social pressure or influence to perform or not perform a particular action. SN result from personal motivation, normative beliefs, individuals’ knowledge, and the influence and experiences of third parties who may affect the respondent. DN concern the perceptions that individuals have about behaviors exhibited by others in a community.

In this study, DN reveal the degree to which the respondent presents themselves as a person who knows how to avoid disinformation. PBC means personal beliefs that an individual has the capabilities, that is, the competencies and resources, to control factors that may influence their behavior. In this study, PBC refers to the degree to which the respondent believes in their ability to exercise self-control and avoid disinformation. Therefore, this study proposes the following hypotheses:

H1: Behavioral Attitude (BA) has an impact on Behavioral Intention (BI).

H2: Moral Norms (MN) have a positive impact on Behavioral Intention (BI).

H3: Subjective Norms (SN) have a positive impact on Behavioral Intention (BI).

H4: Descriptive Norms (DN) have a positive impact on Behavioral Intention (BI).

H5: Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI).

H6: Perceived Behavioral Control (PBC) has a positive impact on Cyber Hygiene Behavior (CHB).

H7: Behavioral Intention (BI) has a positive impact on Cyber Hygiene Behavior (CHB).

Beyond variables considered in the TPB model, this study added other variables. Two of them, that is, HA and JU, have been introduced to the ETPB model by Savari, Mombeni, and Izadi (Reference Savari, Mombeni and Izadi2022). HA are repetitive actions, which are performed regularly or automatically in human lives. They can be positive (i.e., good habits, e.g., teeth cleaning) and negative (i.e., unhealthy habits, e.g., avoiding physical activities). In this paper, HA includes individual practices and routines applied by the recipients, particularly avoiding internet news. JU refers to collecting and revealing the reasons for a particular action, decision, or belief. JU means a personal explanation of regulations, policies, and administrative practices to avoid disinformation.

This study also considered the impact of variables related to security, Anxiety (AN), and risk. Hence, the conceptual model covers three additional factors that may influence CHB, which encompasses the practices and habits that maintain a high level of cybersecurity and protect digital assets. CHB may also include preventive steps to protect mental health and to avoid unreliable, untested, unchecked, or malicious information. Cyber AN is the degree to which a person hesitates to use internet information because of its potential harmfulness. PR is defined as the degree of risk recognized by an individual. SEC means the level of knowledge of Information Technology (IT) tools that protect against attacks by human agents or software. Hence, the next hypotheses are as follows:

H8: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).

H9: Habits (HA) have an impact on Cyber Hygiene Behavior (CHB).

H10: Perceived Risks (PRs) have an impact on Cyber Hygiene Behavior (CHB).

H11: Security (SEC) has an impact on Cyber Hygiene Behavior (CHB).

H12: Anxiety (AN) has an impact on Cyber Hygiene Behavior (CHB).

Figure 5.1 includes the conceptual model of variables having an impact on CHB. In this theoretical framework, relationships among constructs, that is, latent variables, as well as between constructs and their assigned indicators, that is, items or observable variables, are shown with arrows.


Figure 5.1 Conceptual model.
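One compact way to make the structure of Figure 5.1 explicit is to encode it as data: the measurement model assigns the observable items of Table 5.1 to their latent constructs, and the structural model lists the hypothesized predictors of each endogenous construct (H1–H12). The sketch below is an illustrative encoding only, not SmartPLS input syntax.

```python
# Illustrative encoding of the conceptual model in Figure 5.1.
# Measurement model: latent construct -> observable items (see Table 5.1).
measurement_model = {
    "AN":  ["AN1", "AN2", "AN3"],
    "PR":  ["PR1", "PR2", "PR3", "PR4"],
    "SEC": ["SEC1", "SEC2", "SEC3"],
    "MN":  ["MN1", "MN2", "MN3", "MN4"],
    "BA":  ["BA1", "BA2", "BA3", "BA4", "BA5", "BA6", "BA7"],
    "SN":  ["SN1", "SN2", "SN3", "SN4"],
    "DN":  ["DN1", "DN2", "DN3", "DN4"],
    "PBC": ["PBC1", "PBC2", "PBC3", "PBC4"],
    "BI":  ["BI1", "BI2", "BI3", "BI4", "BI5"],
    "HA":  ["HA1", "HA2", "HA3"],
    "JU":  ["JU1", "JU2", "JU3", "JU4"],
    "CHB": ["CHB1", "CHB2", "CHB3", "CHB4"],
}

# Structural model: endogenous construct -> hypothesized predictors.
structural_model = {
    "BI":  ["BA", "MN", "SN", "DN", "PBC"],               # H1-H5
    "CHB": ["PBC", "BI", "JU", "HA", "PR", "SEC", "AN"],  # H6-H12
}
```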

Observable Indicators for Cyber Hygiene Behavior Model

For the past thirty years, ICT in general, and the internet in particular, have played a significant role in communications among people in all sectors of life (i.e., education, administration, business, health care, and agriculture). The benefits of ICT outweigh the risks and waste caused by disinformation. To evaluate young people’s behavior and recognize factors that have an impact on their BIs and actions, the TPB model has been specified and estimated. The literature survey on the TAM, UTAUT, and TPB models leads to the observation that researchers focus on identifying latent variables. However, the specification of observable items, that is, indicators, should also be discussed.

Considering the items identified in the literature and proposed by other researchers, this study’s items are included in Table 5.1.

Table 5.1 Items included in the survey

| Latent variable | Item | Mean RO | Mean PL |
| Anxiety (AN) | AN1: I feel apprehensive about finding fake news on the internet | 4.115 | 3.245 |
| | AN2: I hesitate to use social media for fear of finding fake news | 2.285 | 1.685 |
| | AN3: Fake news are threats to democratic values and democratic institutions | 5.394 | 4.925 |
| Perceived Risk (PR) | PR1: Buying products promoted by an unreliable source adds to the uncertainty about the results | 5.782 | 5.780 |
| | PR2: Disinformation destroys a positive image and reputation | 5.842 | 5.890 |
| | PR3: I accept the risk to enable learning from uncertain sources | 3.194 | 5.080 |
| | PR4: I think there is no risk in using social media to meet new people | 2.291 | 2.900 |
| Security (SEC) | SEC1: Anti-spamming software allows me to avoid fake news | 3.982 | 4.070 |
| | SEC2: Internet service provider warns me about fake news | 2.327 | 3.500 |
| | SEC3: I pay consideration to website artifacts, i.e., padlock or https | 4.024 | 4.755 |
| Moral Norms (MN) | MN1: Avoiding fake news dissemination is a matter of conscience for me | 5.600 | 4.040 |
| | MN2: I feel compelled by my conscience to punish fake news providers | 4.279 | 5.040 |
| | MN3: I feel uncomfortable when I observe that other people tolerate fake news dissemination | 5.497 | 4.660 |
| | MN4: I feel responsible for the true information inserted by me on the internet | 6.115 | 5.110 |
| Behavioral Attitude (BA) | BA1: I like to be engaged in the activity for fake news recognition | 4.121 | 2.830 |
| | BA2: I believe that constant monitoring of COVID-19 news has a positive impact on my mental health | 2.521 | 3.410 |
| | BA3: I have enough responsibility not to read fake news | 5.370 | 4.995 |
| | BA4: I think it is better to verify the information provenance | 6.467 | 5.825 |
| | BA5: I think that unreliable source of data may provide fake news | 5.697 | 4.660 |
| | BA6: I think that losers and crazy people provide fake news on the internet | 3.618 | 3.940 |
| | BA7: I think fake news is like a joke | 2.455 | 3.685 |
| Subjective Norms (SN) | SN1: Some of my colleagues have been deceived by fake news | 5.091 | 4.620 |
| | SN2: Public opinion will affect my choice of the internet news | 3.393 | 4.300 |
| | SN3: People whom I work with help each other to recognize fake news | 4.327 | 4.360 |
| | SN4: People whom I trust warn me and explain to me the fake news | 5.164 | 4.930 |
| Descriptive Norms (DN) | DN1: I think most of my friends know how to avoid fake news | 4.709 | 4.875 |
| | DN2: I am sure that people around me do not read unreliable news | 3.273 | 3.895 |
| | DN3: I believe that most of my family thinks that reading unreliable news is unreasonable and wrong | 4.685 | 5.040 |
| | DN4: Reading fake news is disgusting to the people around me | 4.006 | 4.120 |
| Perceived Behavioral Control (PBC) | PBC1: My technical ability is sufficient to avoid disinformation | 5.079 | 5.270 |
| | PBC2: I purposefully avoid nonverified information | 5.364 | 5.005 |
| | PBC3: I know how to avoid fake news | 5.267 | 5.335 |
| | PBC4: I think I have good self-control | 5.818 | 5.175 |
| Behavioral Intention (BI) | BI1: I would like to know more about the possibilities of verifying internet information | 6.236 | 4.945 |
| | BI2: I will recommend my friends or relatives to verify information from uncertain or unknown sources | 6.073 | 4.890 |
| | BI3: Post COVID-19, I carefully check information on it | 5.442 | 4.435 |
| | BI4: I will take good care of myself, particularly when I am browsing unsafe portals | 5.933 | 5.525 |
| | BI5: I am still looking for news that allows me to verify the information received earlier | 5.418 | 4.655 |
| Habits (HA) | HA1: I do not think about the fake news on the internet because I do not read internet news | 2.539 | 3.450 |
| | HA2: I habitually always pay attention to reliability of news and always check the source of information | 5.273 | 4.910 |
| | HA3: I always read reliable information on the internet because it has become a habit for me | 4.836 | 4.570 |
| Justification (JU) | JU1: Due to the fake news dissemination, people do not trust each other and the internet is not a reliable source of information | 4.685 | 4.855 |
| | JU2: Governmental activities to punish and reduce fake news are small and hard to notice | 5.685 | 5.050 |
| | JU3: The habit of reducing fake news on the internet is usually forgotten when people need to receive important information, for example on COVID-19 risks | 5.333 | 4.550 |
| | JU4: Increasing the punishment for fake news is often overlooked because there is so much everyday news and people do not remember nor recognize what is false or true | 5.497 | 4.985 |
| Cyber Hygiene Behavior (CHB) | CHB1: I avoid constantly studying the news on gossip portals | 5.358 | 5.135 |
| | CHB2: I will not encourage others to study the gossip portal news | 5.430 | 5.545 |
| | CHB3: I immediately remove emails from unknown senders | 4.873 | 4.610 |
| | CHB4: I do not click on links or attachments from uncollected emails or texts | 6.515 | 5.970 |

The research data were collected using a questionnaire and analyzed using SEM. The survey respondents were students at the University of Economics in Katowice (Poland) and the Babeş-Bolyai University (Romania). The questionnaires were distributed to bachelor, master, and doctoral-level students. The responses to the questionnaire were voluntary and anonymized. This research collected 200 questionnaires from the University of Economics in Katowice and 165 questionnaires from the Babeş-Bolyai University.

The students were asked to express their degree of agreement or disagreement with the statements in Table 5.1 by marking their answers on a seven-point Likert scale with the following meanings: 1 – absolutely disagree; 2 – disagree; 3 – rather disagree; 4 – irrelevant; 5 – rather agree; 6 – agree; and 7 – definitely agree.
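For reference, this response coding can be written out as a simple mapping; the sketch below merely restates the scale above and adds no new data.

```python
# Seven-point Likert coding used for every statement in Table 5.1.
likert_scale = {
    1: "absolutely disagree",
    2: "disagree",
    3: "rather disagree",
    4: "irrelevant",
    5: "rather agree",
    6: "agree",
    7: "definitely agree",
}
```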

Table 5.1 contains the items included in the survey and presents the list of questions with acronyms and the set of latent variables. The last two columns in Table 5.1 include the average (Mean) values of these research indicators for the Romanian (RO) and Polish (PL) samples. The Pearson correlation between these two columns is 0.7854; hence, the authors conclude that the responses of the two populations under study are highly comparable.
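This comparability check can be reproduced directly from the two mean columns of Table 5.1. The sketch below uses only the first few item means for brevity; the chapter’s figure of 0.7854 refers to the full set of items.

```python
# Pearson correlation between the Romanian and Polish item means of Table 5.1
# (only the AN and PR items are shown here; the full table yields 0.7854).
from scipy.stats import pearsonr

mean_ro = [4.115, 2.285, 5.394, 5.782, 5.842, 3.194, 2.291]  # AN1-AN3, PR1-PR4
mean_pl = [3.245, 1.685, 4.925, 5.780, 5.890, 5.080, 2.900]

r, p_value = pearsonr(mean_ro, mean_pl)
print(f"Pearson r = {r:.4f} (p = {p_value:.3f})")
```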

The TPB Model Evaluation

The presented conceptual model (see Figure 5.1) consists of items connected to the latent variables. SmartPLS 3 was used to calculate the model (Ringle, Hair, & Sarstedt, Reference Ringle, Hair and Sarstedt2014). In the first run, the model was calculated with the PLS algorithm; the number of iterations was set to 1,000 and the stop criterion to 10^−7. The model was then calculated with the Bootstrap algorithm, with the number of samples set to 5,000 for the full version, using bias-corrected and accelerated confidence intervals and a two-tailed test. The significance level was set to 0.05.

The conceptual model (Figure 5.1) was estimated twice: first with the data gathered in Poland, then with the data gathered in Romania. The reliability of the variables was evaluated using Cronbach’s Alpha and Composite Reliability (CR), and the results for reliability and validity are presented for the overall sample. Cronbach’s Alpha assesses reliability by comparing the amount of shared variance, or covariance, among the items in a psychological test or questionnaire (Collins, Reference Collins2007). CR is an “indicator of the shared variance among the observed variables used as an indicator of a latent construct” (Fornell & Larcker, Reference Fornell and Larcker1981). Both Cronbach’s Alpha and CR values are recommended to be higher than 0.600. Cronbach’s Alpha values of 0.60 to 0.70 are acceptable in exploratory research, while values between 0.70 and 0.90 are regarded as satisfactory (Nunnally & Bernstein, Reference Nunnally and Bernstein1994). The Average Variance Extracted (AVE) and CR values should be higher than or close to 0.500 and 0.700, respectively, which corroborates convergent validity; Fornell and Larcker (Reference Fornell and Larcker1981) note that if AVE is less than 0.5 but CR is higher than 0.6, the validity of the construct is still adequate. The results for reliability and validity for the overall sample of 200 records from Poland are included in Tables 5.2 and 5.3. In the first estimation, however, the chosen observed variables did not explain the latent variables well, so the model had to be refitted. The unreliable preliminary conceptual model was therefore revised, and Figure 5.2 shows the secondary estimated model, which covers the hypotheses listed below.
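The three statistics used here follow standard formulas: Cronbach’s Alpha compares summed item variance with total-score variance, CR is computed from the standardized outer loadings, and AVE is the mean of the squared loadings. The sketch below shows these computations on made-up data and loadings; the values are illustrative, not the study’s results.

```python
# Minimal sketch of the reliability and validity statistics in Tables 5.2-5.3.
# The item responses and loadings below are invented for illustration only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: n_respondents x n_items matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """loadings: standardized outer loadings of one construct's items."""
    error_vars = 1 - loadings**2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_vars.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    return float(np.mean(loadings**2))

rng = np.random.default_rng(1)
pbc_items = rng.integers(1, 8, size=(200, 4)).astype(float)  # hypothetical PBC1-PBC4
pbc_loadings = np.array([0.72, 0.68, 0.75, 0.70])            # hypothetical loadings

print("Cronbach's Alpha:", round(cronbach_alpha(pbc_items), 3))
print("CR:", round(composite_reliability(pbc_loadings), 3))
print("AVE:", round(average_variance_extracted(pbc_loadings), 3))
```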

Table 5.2 Construct reliability and validity – preliminary model estimated (sample size: 200 records from Poland)

| Construct | Cronbach's Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE) |
| AN | 0.454 | 0.553 | 0.696 | 0.447 |
| BA | 0.373 | 0.485 | 0.618 | 0.234 |
| BI | 0.678 | 0.704 | 0.795 | 0.442 |
| CHB | 0.609 | 0.617 | 0.775 | 0.468 |
| DN | 0.592 | 0.543 | 0.673 | 0.389 |
| HA | 0.272 | 0.521 | 0.508 | 0.404 |
| JU | 0.647 | 0.678 | 0.779 | 0.473 |
| MN | 0.599 | 0.697 | 0.752 | 0.443 |
| PBC | 0.685 | 0.70 | 0.806 | 0.512 |
| PR | −0.115 | 0.267 | 0.005 | 0.321 |
| SEC | 0.325 | 0.226 | 0.533 | 0.356 |
| SN | 0.410 | 0.717 | 0.638 | 0.377 |
Table 5.3 Construct reliability and validity – secondary model estimated (sample size: 200 records from Poland)

| Construct | Cronbach's Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE) |
| BI | 0.678 | 0.703 | 0.795 | 0.442 |
| CHB | 0.609 | 0.626 | 0.774 | 0.467 |
| DN | 0.592 | 0.540 | 0.665 | 0.386 |
| JU | 0.647 | 0.677 | 0.779 | 0.473 |
| MN | 0.599 | 0.695 | 0.751 | 0.443 |
| PBC | 0.685 | 0.701 | 0.806 | 0.512 |

Figure 5.2 The final model with estimated coefficients (sample size: 200 records from Poland).

H1: Behavioral Intention (BI) has a positive impact on Cyber Hygiene Behavior (CHB).

H2: Descriptive Norms (DN) have a positive impact on Behavioral Intention (BI).

H3: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).

H4: Moral Norms (MN) have a positive impact on Behavioral Intention (BI).

H5: Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI).

H6: Perceived Behavioral Control (PBC) has a positive impact on Cyber Hygiene Behavior (CHB).

Path coefficients and R² values for the endogenous constructs are included in Table 5.4.

Table 5.4 PLS Algorithm R² and Path Coefficients (sample size: 200 records from Poland)

R² values: BI = 0.350; CHB = 0.246.

Path coefficients: DN → BI = 0.142; MN → BI = 0.290; PBC → BI = 0.376; BI → CHB = 0.169; JU → CHB = 0.304; PBC → CHB = 0.150.
The goodness of the model is estimated by the strength of each structural path, determined by the R² value of the dependent variables (Jankelová, Joniaková, & Skorková, Reference Jankelová, Joniaková and Skorková2021). Generally, R² is a statistical measure of the goodness of fit of a regression model. For the dependent variables, the R² value should be equal to or greater than 0.125 (Falk & Miller, Reference Falk and Miller1992). The results in Table 5.4 show that both R² values exceed this threshold.

R² ranges from 0 to 1, with higher values indicating stronger explanatory power. As a general guideline, R² values of 0.75, 0.50, and 0.25 can be considered substantial, moderate, and weak, respectively, in many social science disciplines (Hair Jr. et al., Reference Hair, Hult, Ringle, Sarstedt, Danks and Ray2021). Acceptable R² values nevertheless depend on the research context: in some disciplines an R² as low as 0.10 is considered satisfactory, although in large-sample research such a value may be statistically significant but substantively meaningless (Falk & Miller, Reference Falk and Miller1992).
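As a quick illustration of how these guidelines apply to the R² values reported in Table 5.4 (the thresholds are the ones cited above; acceptable cutoffs vary by field):

```python
# Applying the R^2 interpretation guidelines discussed above (Hair et al.'s
# 0.75 / 0.50 / 0.25 thresholds; acceptable cutoffs vary by discipline).
def interpret_r2(r2: float) -> str:
    if r2 >= 0.75:
        return "substantial"
    if r2 >= 0.50:
        return "moderate"
    if r2 >= 0.25:
        return "weak"
    return "below the 0.25 guideline (may still be acceptable in context)"

for construct, r2 in {"BI": 0.350, "CHB": 0.246}.items():  # values from Table 5.4
    print(f"{construct}: R^2 = {r2} -> {interpret_r2(r2)}")
```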

Table 5.5 covers the Bootstrapping Path Coefficients values for the final model as well as the decisions on the proposed hypotheses’ acceptance or rejection.

Table 5.5 Bootstrapping Path Coefficients for the final model (sample size: 200 records from Poland)

| Hypothesis No | Hypothesis (impact direction →) | Original sample | Sample mean | Standard deviation | T-statistics | P values | Decision |
| H1 | BI → CHB | 0.169 | 0.174 | 0.078 | 2.175 | 0.030 | Accepted |
| H2 | DN → BI | 0.142 | 0.156 | 0.085 | 1.673 | 0.094 | Rejected |
| H3 | JU → CHB | 0.304 | 0.316 | 0.068 | 4.453 | 0.000 | Accepted |
| H4 | MN → BI | 0.290 | 0.293 | 0.065 | 4.476 | 0.000 | Accepted |
| H5 | PBC → BI | 0.376 | 0.378 | 0.063 | 6.019 | 0.000 | Accepted |
| H6 | PBC → CHB | 0.150 | 0.150 | 0.075 | 1.994 | 0.046 | Accepted |

The results of the tests indicate that the proposed constructs (i.e., JU, MN, PBC) have a weak impact on the intention and behavior of students (expressed as BI and CHB) to avoid disinformation. If a P value is below a certain threshold, the corresponding hypothesis is assumed to be supported; the threshold is usually 0.05 (Kock, Reference Kock2014). Therefore, in this research, hypotheses H1, H3, H4, H5, and H6 are supported, while hypothesis H2 is rejected. This means that: (1) Behavioral Intention (BI) has an impact on Cyber Hygiene Behavior (CHB); (2) Justification (JU) has a positive impact on Cyber Hygiene Behavior (CHB); (3) Moral Norms (MN) have a weak impact on Behavioral Intention (BI); (4) Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI); and (5) Perceived Behavioral Control (PBC) has an impact on Cyber Hygiene Behavior (CHB).
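The decision rule behind Tables 5.5 and 5.9 can be sketched as follows: resample the respondents with replacement, re-estimate the path coefficient on each bootstrap sample, and use the bootstrap standard error to form a t-statistic and a two-tailed p-value. The construct scores and single-path estimator below are synthetic stand-ins for the SmartPLS computation, included only to show the logic.

```python
# Sketch of the bootstrap significance test behind Tables 5.5 and 5.9:
# resample respondents, re-estimate a path coefficient each time, then compute
# a t-statistic and two-tailed p-value. Data here are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_boot = 200, 5000

# Hypothetical standardized construct scores with a PBC -> BI path of about 0.35.
pbc = rng.normal(size=n)
bi = 0.35 * pbc + rng.normal(scale=0.9, size=n)

def path_coefficient(x, y):
    # For a single standardized predictor, the path equals the correlation.
    return np.corrcoef(x, y)[0, 1]

original = path_coefficient(pbc, bi)

boot = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)  # resample respondents with replacement
    boot[b] = path_coefficient(pbc[idx], bi[idx])

se = boot.std(ddof=1)
t_stat = original / se
p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))  # two-tailed
decision = "Accepted" if p_value < 0.05 else "Rejected"
print(f"path = {original:.3f}, SE = {se:.3f}, t = {t_stat:.3f}, "
      f"p = {p_value:.3f} -> {decision}")
```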

Next, the study estimated the conceptual model using the data from Romania. For these data too, however, the reliability and validity measures have low values (Table 5.6), so the authors eliminated some variables from that model.

Table 5.6 Construct reliability and validity – preliminary model estimated (sample size: 165 records from Romania)

| Construct | Cronbach's Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE) |
| AN | 0.453 | 0.461 | 0.733 | 0.480 |
| BA | 0.294 | 0.553 | 0.416 | 0.244 |
| BI | 0.707 | 0.721 | 0.808 | 0.459 |
| CHB | 0.487 | 0.505 | 0.703 | 0.381 |
| DN | 0.612 | 0.624 | 0.774 | 0.463 |
| HA | 0.045 | 0.711 | 0.508 | 0.593 |
| JU | 0.640 | 0.642 | 0.786 | 0.480 |
| MN | 0.705 | 0.726 | 0.819 | 0.535 |
| PBC | 0.674 | 1.104 | 0.711 | 0.400 |
| PR | 0.019 | 0.383 | 0.091 | 0.304 |
| SEC | 0.459 | −0.254 | 0.177 | 0.309 |
| SN | 0.470 | 0.537 | 0.674 | 0.391 |

Since the chosen observed variables did not explain the latent variables well, model refitting was necessary here as well. The unreliable preliminary conceptual model was revised, and Figure 5.3 shows the secondary estimated model, which covers the following hypotheses:


Figure 5.3 The final model with estimated coefficients (sample size: 165 records from Romania).

H1: Behavioral Intention (BI) has a positive impact on Cyber Hygiene Behavior (CHB).

H2: Descriptive Norms (DN) have a positive impact on Behavioral Intention (BI).

H3: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).

H4: Moral Norms (MN) have a positive impact on Behavioral Intention (BI).

H5: Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI).

H6: Perceived Behavioral Control (PBC) has a positive impact on Cyber Hygiene Behavior (CHB).

The same reliability and validity verification was done for the Romania model (Table 5.7).

Table 5.7 Construct reliability and validity – secondary model estimated (sample size: 165 records from Romania)

| Construct | Cronbach's Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE) |
| BI | 0.707 | 0.726 | 0.808 | 0.458 |
| CHB | 0.543 | 0.559 | 0.812 | 0.685 |
| DN | 0.612 | 0.622 | 0.774 | 0.463 |
| JU | 0.640 | 0.677 | 0.783 | 0.476 |
| MN | 0.705 | 0.726 | 0.819 | 0.535 |
| PBC | 0.674 | 1.239 | 0.688 | 0.382 |

Path coefficients and R² values for the endogenous constructs are included in Table 5.8.

Table 5.8 PLS Algorithm R² and Path Coefficients (sample size: 165 records from Romania)

R² values: BI = 0.376; CHB = 0.333.

Path coefficients: DN → BI = −0.043; MN → BI = 0.509; PBC → BI = 0.233; BI → CHB = 0.462; JU → CHB = 0.050; PBC → CHB = 0.185.

Table 5.9 covers the Bootstrapping Path Coefficient values for the final model as well as the decisions on the proposed hypotheses’ acceptance or rejection.

Table 5.9 Bootstrapping Path Coefficients for the final model (sample size: 165 records from Romania)

| Hypothesis No | Hypothesis (impact direction →) | Original sample | Sample mean | Standard deviation | T-statistics | P values | Decision |
| H1 | BI → CHB | 0.462 | 0.450 | 0.101 | 4.578 | 0.000 | Accepted |
| H2 | DN → BI | −0.043 | −0.007 | 0.098 | 0.434 | 0.665 | Rejected |
| H3 | JU → CHB | 0.050 | 0.076 | 0.085 | 0.589 | 0.556 | Rejected |
| H4 | MN → BI | 0.509 | 0.519 | 0.071 | 7.131 | 0.000 | Accepted |
| H5 | PBC → BI | 0.233 | 0.232 | 0.090 | 2.591 | 0.010 | Accepted |
| H6 | PBC → CHB | 0.185 | 0.191 | 0.077 | 2.413 | 0.016 | Accepted |

The results of the tests indicate that the proposed constructs (i.e., MN, PBC) have a weak or moderate impact on the intention and behavior of students (expressed as BI and CHB) to avoid disinformation. In this research, the threshold for the P value is also 0.05. Therefore, hypotheses H1, H4, H5, and H6 are supported, while hypotheses H2 and H3 are rejected. This means that: (1) Behavioral Intention (BI) has an impact on Cyber Hygiene Behavior (CHB); (2) Moral Norms (MN) have a moderate impact on Behavioral Intention (BI); (3) Perceived Behavioral Control (PBC) has a weak positive impact on Behavioral Intention (BI); and (4) Perceived Behavioral Control (PBC) has a weak positive impact on Cyber Hygiene Behavior (CHB).

Conclusion

Cyber disinformation is a complex and concerning phenomenon. Successful disinformation campaigns can have a significant negative effect on democratic values and institutions. Defending democracy in the digital age requires a complex approach. The individual behavior of users can influence the spread and effects of the phenomenon.

This chapter argued that users’ behavior plays an essential role in this phenomenon and aimed to identify factors that have an impact on users’ BIs and CHB. The chapter integrated the ETPB and a Structural Equation Model, realized through PLS-SEM, applied to the cyber disinformation phenomenon. The analysis of the self-assessment survey on disinformation risk perception and control revealed that the responses of the two samples are highly similar, with a correlation of 0.7854. The research demonstrated the applicability of the TPB model and found that MN and PBC have an impact on BI and CHB.

The findings of this chapter provide valuable insights that can be used to improve overall responses to the phenomenon, such as policies, programs, and clinics, and to develop educational materials. To effectively address the phenomenon’s relevant vectors, tactics, and methods, there is a clear need for a complex strategy with multiple components, including research to better understand the phenomenon’s attributes and the behavior of users; frequent risk assessments; increased empowerment of people to detect and report disinformation; improved fact-checking procedures; enhanced international anti-disinformation enforcement and cooperation; technical assistance programs; better defined responsibility for secondary liability; and awareness-raising and education programs, with a view to improving people’s critical thinking abilities.

Footnotes

2 Hacking Elections Contemporary Developments in Historical Perspective

1 The New Jersey Constitution, adopted on July 2, 1776, uses the gender-neutral pronoun “they” and doesn’t include racial categories in its election law.

2 The 13th Amendment outlawed slavery and the 14th Amendment gave citizenship to all persons “born or naturalized in the United States,” including formerly enslaved people, and provided all citizens with “equal protection under the laws.”

3 Guinn v. United States, 238 U.S. 347 (Supreme Court of the United States, June 21, 1915).

4 Harper v. Virginia Board of Elections, 383 U.S. 663 (Supreme Court of the United States, March 24, 1966).

5 Specifically, the 1870 Act, referred to variously as the Enforcement Act or the First Ku Klux Klan Act, made it a federal criminal offense to prevent Black Americans from voting or to threaten violence or other retaliation for voting, such as loss of employment or eviction from their home.

6 The 1957 amendment authorized the U.S. Attorney General to seek federal court injunctions to safeguard the voting rights of Black Americans. The 1960 amendment bolstered court enforcement of voting rights and mandated preservation of voting records. The 1964 amendment required desegregation of voting places, among other provisions.

7 Bush v. Gore, 531 U.S. 98 (United States Supreme Court, December 12, 2000).

8 The VRA was reauthorized in 1970, 1975, and 1982, with the most recent reauthorization occurring in 2006 when President George W. Bush signed legislation to extend the VRA for an additional twenty-five years. In 2009, Congress enacted the Military and Overseas Voter Empowerment Act (“MOVE Act”), which amended UOCAVA to establish new voter registration and absentee ballot requirements for federal elections to further facilitate the ability of Americans living overseas to vote.

9 United States v. Price, 383 U.S. 787 (Supreme Court of the United States, March 28, 1966).

10 Civil Rights Act, Pub. L. No. 88-352, 78 Stat. 241 (1964).

11 John Lewis was elected to Congress in 1986, representing Georgia’s fifth congressional district spanning most of Atlanta, Georgia, where he served as lawmaker until he passed away on July 17, 2020.

12 Shelby County v. Holder, 570 U.S. 529 (Supreme Court of the United States, June 25, 2013).

13 Brnovich v. Democratic National Committee, 594 U.S. _ (2021) (Supreme Court of the United States, July 2, 2021).

14 Bush v. Gore, 531 U.S. 98 (Supreme Court of the United States, December 12, 2000).

15 See ibid., p. 104.

16 Harris wrote a book documenting her discovery and examination of the source code, which Diebold had previously refused to make public (Harris, Reference Harris2004). A Government Accountability Office (GAO) report from September 2005 reviews these and other studies documenting security problems in digital voting systems (Government Accountability Office, 2005).

17 The goal of the study was “to identify and document vulnerabilities, if any, to tampering or error that could cause incorrect recording, tabulation, tallying or reporting of votes or that could alter critical election data such as election definition or system audit data.”

18 Illinois has admitted to being one of them, and Arizona is believed to be the other.

19 The public record about the commission, including statements from President Trump and the commission’s leadership, Vice President Mike Pence and Kansas politician Kris Kobach, is devoid of any mention of cyber threats.

20 Examples include the Secure Elections Act of 2017, the Election Security Act of 2018, the Protecting American Voting Rights Act of 2019, the Secure Elections Act of 2020, and the John Lewis Voting Rights Advancement Act of 2020.

21 Relatedly, the legislation also appropriated $300 million to the FBI for combating Russian cyber operations against the United States.

22 Indictment, United States v. Internet Research Agency et al., 1:18-CR-32 (DLF) (D.D.C. Feb. 16, 2018).

3 From Free Speech to False Speech Analyzing Foreign Online Disinformation Campaigns Targeting Democracies

This chapter investigates the extent to which Russian and Chinese online disinformation campaigns attempt to influence democracies to further their strategic aims. It is vital to first locate the challenge of disinformation within the international security landscape and today’s “post-truth” world. While disinformation – intentionally false information – is not new, the internet presents unique challenges in detecting, responding to, and curbing such content.

4 Cyber Challenges to Democracy and Soft Power’s Dark Side

I presented an early draft of this chapter at the 2023 International Studies Association meeting in Montreal. My thanks to fellow panelists Christopher Ankersen and John Bonilla, as well as the audience members, including Greg Olsen, for their critical engagement and helpful observations. Scott Shackelford and Frédérick Douzet provided extremely useful critique during a workshop for the present volume. I am also grateful to Nandini Dey, Thomas Risse, Richard Katz, Sebastian Schmidt, Susan T. Jackson, and Patrick Thaddeus Jackson for clarifying questions and constructive criticism on my larger project on malign soft power and information flows.

1 The term “soft power,” as the power of attraction rather than coercion, was first coined by Joseph Nye (Reference Nye1990a, Reference Nye1990b). The term encompasses public diplomacy efforts (Hayden, Reference Hayden2017; Nye, Reference Nye2008) as well as the efforts of other actors who are not working at the direction of states (Zaharna, Reference Zaharna and Zaharna2010). This chapter only briefly mentions antidemocratic hard power. Hacking voting machines to corrupt their data files would be an example of hard power and coercion rather than soft power and attraction. In cyberspace, as in the physical world, hard power actions often happen alongside soft power ones.

2 Nye himself notes the advent of twenty-first century information technology “means … that foreign policy will not be the sole province of governments. Both individuals and private organizations, here and abroad, will be empowered to play direct roles in world politics” (Nye, Reference Nye2002, p. 61). The context here is that Nye foresees greater volatility in the kinds of soft power efforts that circulate via information technologies. Malign efforts are not excluded from the future as he imagines it in 2002, but his emphasis is on the positive aspects of soft power: advocacy for human rights and actions taken in the global public interest.

3 I use the gerund form, wielding, intentionally (albeit awkwardly) to indicate the action as it unfolds. Unknown actors wield power by controlling flows of information.

4 Cammaerts also identifies the commodification of the internet, unequal power leading to censorship by states and intimidation by employers, and the outsized ability of elites to further their own interests.

5 I do not mean to imply that efforts to demand accountability by identifying actors are worthless or counterproductive. Such efforts may be easier or more necessary in cases of hard power attacks. Governments and nongovernmental organizations do attempt to identify who is acting when malicious actions are detected (Mueller et al., Reference Mueller, Grindal, Kuerbis and Badiei2019; The RAND Corporation, 2019). Herbert Lin (2016) discusses how to identify those who are responsible for hard power cyberattacks. He separates aspects of attribution into the machine or machines that originated the malicious action, the human who took action, and the “ultimate responsible party.” Such efforts are easier, I believe, in the context of specific hard power, such as ransomware attacks. The ubiquity of malign soft power makes tracking down culprits, including the misguided people who do not understand that they are sharing misinformation, a Sisyphean task.

6 Spoofing can be considered a form of pseudonymity.

7 Internet pioneers would have preferred a privatization in which the United States did not retain a good deal of control (Mueller, 1999).

8 For example, Iranian-backed hackers attacked an Israeli-made industrial control device used by water authorities in a number of United States systems in late 2023 (Bajak & Levy, 2023).

9 A full-fledged review of the voluminous literature on soft power in International Relations and other fields of scholarship is beyond the scope of this chapter, but some additional IR works include Gilboa (2008), Henne (2022), Wilson (2008), Goldsmith and Horiuchi (2012), Goldthau and Sitter (2015), Nye (2021), Keating and Kaczmarska (2019), and Tella (2023).

10 Nye’s quote from this early article refers to “countries” rather than the more general “actors” I have substituted here. Most assessments of soft power, including works by Nye (e.g., 2019), also hold that non-state actors, such as small groups and individuals, can wield soft power. Soft power is a strategy micro-actors can use to similarly endear themselves to other actors and become influential in world politics. Soft power can be generated by using resources that are relatively cheap and accessible, a much lower bar than micro-actors would face if they wished to deploy hard power. See also Zahran and Ramos (2010) and Nimmo et al. (2019).

11 Some efforts at wielding soft power do seem quite positive in effect, and it is hard to find fault in some cultural exchanges that promote democracy (e.g., Atkinson, 2010).

12 For example, patriotic Russians who support Putin and are suspicious of too much freedom, on the one hand, and supporters of Western liberal democracy, on the other, will certainly have different views of what is harmful.

13 I also acknowledge the inescapable tension between democracy’s expansive protections of freedom of speech and the dangers of hateful speech and speech that spreads lies and bullshit (Frankfurt, 2005).

14 Soft power can be adversarial without being malign, though the determination depends on standpoint and context. State actors routinely use soft power actions to portray themselves as more attractive in an effort to bolster their own image and undercut adversary countries’ relations with others. For example, China’s Belt and Road Initiative has adversarial elements to it, but in practical terms, it seems to fall short of the threshold for being malign.

15 Common sense tells us that actors wield power through material means, through informational means, or through a combination. Examples are bullets (material), brainwashing (informational), and Foucault’s panopticon (combination). This chapter focuses on wielding power with information, but it is also possible to translate the material into the informational for the purpose of analysis. The bullet informs the wounded body of injury and triggers signals of pain (Marlin-Bennett, 2013).

16 Other modes are possible. Modes often work in combination.

17 I am referring here to a plain language understanding of “seduction” and “seduce” such as one finds in an English language dictionary (e.g., “Seduce, v.,” 2023). For malign soft power, the OED Online’s second definition of seduce – “To lead (a person) astray in action, conduct, or belief; to draw (a person) away from the right or intended course of action to or into a wrong or misguided one; to entice or beguile (a person) to do something wrong, foolish, or unintended” – is particularly apt and helpfully covers both sexual and nonsexual kinds of conduct. For a more complex rendering, see Felman (2003). Baudrillard’s and Freud’s treatments of seduction are not useful in this context. Laura Sjoberg (2018) cautions that news accounts of women around the Islamic State frequently portray them as nonagentic and subject to being duped because of their inherent feminine characteristics. Her argument is a compelling reminder that all secondhand accounts are imperfect retellings of actual motivations, knowledge, and desires.

18 Likes and reposts are generally indications that a message has been received with approval – in other words, that the soft power worked. However, that is not always the case. Likes and reposts can be done by bots or cyborgs. Also, people who disapprove of a message sometimes repost it in order to critique it.

19 Boichak et al. (2021), citing Yang and Counts (2010), capture velocity in their study of speed, scale, and range: “[S]peed […] reflects temporality of information events; scale […] speaks to the visibility of a message on a platform through popularity metrics, such as ‘likes’ and ‘retweets’; and range […] denotes the depth of diffusion once the message gets propagated through user networks, reaching new audiences in the process” (Yang & Counts, 2010, pp. 356–357).
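To make the speed/scale/range vocabulary in note 19 concrete, the following minimal Python sketch computes toy versions of the three measures for a single hypothetical sharing cascade. It is an illustration only: the cascade data, field names, and the simple formulas are assumptions for exposition, not the measurement procedures of Yang and Counts (2010) or Boichak et al. (2021), and no real platform data or API is involved.

```python
# Illustrative only: toy speed/scale/range metrics for a hypothetical sharing
# cascade, loosely inspired by the vocabulary quoted in note 19. The data and
# formulas are assumptions for exposition, not the original studies' methods.
from datetime import datetime, timedelta

original_time = datetime(2024, 1, 1, 12, 0)

# Each hypothetical repost records its delay from the original message, its
# distance ("hops") from the source in the sharing network, and its likes.
reposts = [
    {"time": original_time + timedelta(minutes=m), "hops": h, "likes": likes}
    for m, h, likes in [(2, 1, 5), (7, 1, 12), (15, 2, 3), (40, 3, 1)]
]

# Speed: temporality of the information event (here, mean delay in minutes).
speed = sum((r["time"] - original_time).total_seconds() / 60 for r in reposts) / len(reposts)

# Scale: visibility via popularity metrics (here, repost count plus likes).
scale = len(reposts) + sum(r["likes"] for r in reposts)

# Range: depth of diffusion through user networks (here, the longest hop chain).
range_depth = max(r["hops"] for r in reposts)

print(f"speed ~ {speed:.1f} min, scale = {scale}, range = {range_depth} hops")
```

Running the sketch on this toy cascade prints a mean delay of 16 minutes, a scale of 25, and a range of 3 hops; the point is only that the three dimensions capture distinct aspects of how fast, how visibly, and how far a message travels.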

20 Some studies (e.g., Brown et al., 2022; Chen et al., 2023) have found that YouTube algorithms do not radicalize users by sending them “down the rabbit hole” but that YouTube does make extremist content available. Other studies do find that algorithms can serve to radicalize. See Whittaker et al. (2021).

21 Antoinette Verhage (2009, p. 9), writing on similar conundrums of how to find the right balance between governance to limit harm and not impinging on rights, includes this quote.

5 Cyber Disinformation Risk Perception and Control: Integration of the Extended Theory of Planned Behavior and a Structural Equation Model

1 District of Columbia v. Exxon Mobil Corp., Civil Action No. 20-1932 (TJK) (D.C. 2022).

2 RJ Reynolds Tobacco Co. v. Rouse, 307 So.3d 89 (Fla. Dist. Ct. App. 2020).

3 Nat’l Coal. on Black Civic Participation v. Wohl, 498 F. Supp. 3d 457 (S.D.N.Y. 2020).

4 United States v. Mackey, No. 21-CR-80 (AMD)(SB) (E.D.N.Y. Oct. 17, 2023).

References

Abraham, K. J. (2022). Midcentury modern: The emergence of stakeholders in democratic practice. American Political Science Review, 116(2), 631–644. https://doi.org/10.1017/S0003055421001106
Anttiroiko, A.-V. (2010). Innovation in democratic e-governance: Benefitting from Web 2.0 applications in the public sector. International Journal of Electronic Government Research (IJEGR), 6(2), 18–36. https://doi.org/10.4018/jegr.2010040102
Apel, D. (2009). Just joking? Chimps, Obama and racial stereotype. Journal of Visual Culture, 8(2), 134–142. https://doi.org/10.1177/14704129090080020203
Askanius, T. (2021). On frogs, monkeys, and execution memes: Exploring the humor-hate nexus at the intersection of Neo-Nazi and Alt-Right movements in Sweden. Television & New Media, 22(2), 147–165. https://doi.org/10.1177/1527476420982234
Atkinson, C. (2010). Does soft power matter? A comparative analysis of student exchange programs 1980–2006. Foreign Policy Analysis, 6(1), 1–22. https://doi.org/10.1111/j.1743-8594.2009.00099.x
Bajak, F., & Levy, M. (2023, December 2). Breaches by Iran-affiliated hackers spanned multiple U.S. states, federal agencies say. AP News. https://apnews.com/article/hackers-iran-israel-water-utilities-critical-infrastructure-cisa-554b2aa969c8220016ab2ef94bd7635b
Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press. https://doi.org/10.2307/j.ctv12101zq
Baudouin, R. (Ed.). (2011). The Ku Klux Klan: A history of racism and violence (6th ed.). Southern Poverty Law Center. https://splcenter.org/sites/default/files/Ku-Klux-Klan-A-History-of-Racism.pdf
Bennett, J. (2005). The agency of assemblages and the North American blackout. Public Culture, 17(3), 445–465. https://doi.org/10.1215/08992363-17-3-445
Berenskötter, F. (2018). Deep theorizing in International Relations. European Journal of International Relations, 24(4), 814–840. https://doi.org/10.1177/1354066117739096
Berker, S. (2022). Fittingness: Essays in the philosophy of normativity (pp. 23–57). Oxford University Press. https://doi.org/10.1093/oso/9780192895882.001.0001
Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 U.S. Presidential election online discussion. First Monday. https://doi.org/10.5210/fm.v21i11.7090
Bially Mattern, J. (2005). Why soft power isn’t so soft: Representational force and the sociolinguistic construction of attraction in world politics. Millennium: Journal of International Studies, 33(3), 583–612. https://doi.org/10.1177/03058298050330031601
Bjerg, O. (2016). How is Bitcoin money? Theory, Culture & Society, 33(1), 53–72. https://doi.org/10.1177/0263276415619015
Boichak, O., Hemsley, J., Jackson, S., Tromble, R., & Tanupabrungsun, S. (2021). Not the bots you are looking for: Patterns and effects of orchestrated interventions in the U.S. and German elections. International Journal of Communication, 15, 814–839. https://ijoc.org/index.php/ijoc/article/view/14866
Boikos, C., Moutsoulas, K., & Tsekeris, C. (2014). The real of the virtual: Critical reflections on Web 2.0. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 12(1), 405–412. https://doi.org/10.31269/triplec.v12i1.566
Borsook, P. (1996, August). Cyberselfish. Mother Jones. https://motherjones.com/politics/1996/07/cyberselfish/
Breindl, Y., & Francq, P. (2008). Can Web 2.0 applications save e-democracy? A study of how new internet applications may enhance citizen participation in the political process online. International Journal of Electronic Democracy, 1(1), 14–31. https://doi.org/10.1504/IJED.2008.021276
Brown, M. A., Bisbee, J., Lai, A., Bonneau, R., Nagler, J., & Tucker, J. (2022). Echo chambers, rabbit holes, and algorithmic bias: How YouTube recommends content to real users (SSRN Scholarly Paper 4114905). https://doi.org/10.2139/ssrn.4114905
Burki, T. (2020). The online anti-vaccine movement in the age of COVID-19. The Lancet Digital Health, 2(10), e504–e505. https://doi.org/10.1016/S2589-7500(20)30227-2
Cammaerts, B. (2008). Critiques on the participatory potentials of Web 2.0. Communication, Culture and Critique, 1(4), 358–377. https://doi.org/10.1111/j.1753-9137.2008.00028.x
Chatterje-Doody, P., & Crilley, R. (2019). Populism and contemporary global media: Populist communication logics and the co-construction of transnational identities. In Stengel, F. A., MacDonald, D. B., & Nabers, D. (Eds.), Populism and world politics: Exploring inter- and transnational dimensions (pp. 73–99). Springer International Publishing. https://doi.org/10.1007/978-3-030-04621-7_4
Chen, A. Y., Nyhan, B., Reifler, J., Robertson, R. E., & Wilson, C. (2023). Subscriptions and external links help drive resentful users to alternative and extremist YouTube channels. Science Advances, 9(35). https://doi.org/10.1126/sciadv.add8080
Cirone, A., & Hobbs, W. (2023). Asymmetric flooding as a tool for foreign influence on social media. Political Science Research and Methods, 11(1), 160–171. https://doi.org/10.1017/psrm.2022.9
Costa Sánchez, C., & Piñeiro Otero, T. (2012). Social activism in the Web 2.0. Spanish 15m movement. Vivat Academia, 117, 1458–1467. https://doi.org/10.15178/va.2011.117E.1458-1467
DeCook, J., & Forestal, J. (2023). Of humans, machines, and extremism: The role of platforms in facilitating undemocratic cognition. American Behavioral Scientist, 67(5), 629–648. https://doi.org/10.1177/00027642221103186
DiResta, R. (2018, November 8). Of virality and viruses: The anti-vaccine movement and social media. Nautilus Institute. https://nautilus.org/napsnet/napsnet-special-reports/of-virality-and-viruses-the-anti-vaccine-movement-and-social-media/
Dunn Cavelty, M., & Jaeger, M. (2015). (In)visible ghosts in the machine and the powers that bind: The relational securitization of Anonymous. International Political Sociology, 9(2), 176–194. https://doi.org/10.1111/ips.12090
Durkee, A. (2023, March 14). Republicans increasingly realize there’s no evidence of election fraud – but most still think 2020 election was stolen anyway, poll finds. Forbes. https://forbes.com/sites/alisondurkee/2023/03/14/republicans-increasingly-realize-theres-no-evidence-of-election-fraud-but-most-still-think-2020-election-was-stolen-anyway-poll-finds/
Elder-Vass, D. (2008). Searching for realism, structure and agency in actor network theory. The British Journal of Sociology, 59(3), 455–473. https://doi.org/10.1111/j.1468-4446.2008.00203.x
Faisinet, N. (1808). On seduction. The Lady’s monthly museum, or Polite repository of amusement and instruction: Being an assemblage of whatever can tend to please the fancy, interest the mind, or exalt the character of the British fair. / By a society of ladies, 1798–1828, 5, 286–289.
Felman, S. (2003). The scandal of the speaking body: Don Juan with J. L. Austin, or seduction in two languages. Stanford University Press.
Flink, J. (2000). Ford, Henry (1863–1947), automobile manufacturer. In American National Biography. Oxford University Press. https://anb-org.proxy1.library.jhu.edu/display/10.1093/anb/9780198606697.001.0001/anb-9780198606697-e-1000578
Frankfurt, H. (2005). On bullshit. Princeton University Press. https://doi.org/10.1515/9781400826537
Gallarotti, G. (2011). Soft power: What it is, why it’s important, and the conditions for its effective use. Journal of Political Power, 4(1), 25–47. https://doi.org/10.1080/2158379X.2011.557886
Gallarotti, G. (2022). Esteem and influence: Soft power in international politics. Journal of Political Power, 15(3), 383–396. https://doi.org/10.1080/2158379X.2022.2135303
Gaut, B. (1998). Just joking: The ethics and aesthetics of humor. Philosophy and Literature, 22(1), 51–68. https://doi.org/10.1353/phl.1998.0014
Gilboa, E. (2008). Searching for a theory of public diplomacy. The ANNALS of the American Academy of Political and Social Science, 616(1), 55–77. https://doi.org/10.1177/0002716207312142
Goldsmith, B. E., & Horiuchi, Y. (2012). In search of soft power: Does foreign public opinion matter for US foreign policy? World Politics, 64(3), 555–585. https://doi.org/10.1017/S0043887112000123
Goldthau, A., & Sitter, N. (2015). Soft power with a hard edge: EU policy tools and energy security. Review of International Political Economy, 22(5), 941–965. https://doi.org/10.1080/09692290.2015.1008547
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378. https://doi.org/10.1126/science.aau2706
Grossman, L. (2006). You, yes, you, are TIME’s person of the year. TIME Magazine, 168(26), 38–41.
Hayden, C. (2012). The rhetoric of soft power: Public diplomacy in global contexts. Lexington Books.
Hayden, C. (2017). Scope, mechanism, and outcome: Arguing soft power in the context of public diplomacy. Journal of International Relations and Development, 20(2), 331–357. https://doi.org/10.1057/jird.2015.8
Henne, P. (2022). What we talk about when we talk about soft power. International Studies Perspectives, 23(1), 94–111. https://doi.org/10.1093/isp/ekab007
Hindman, M., & Barash, V. (2018, October 4). Disinformation, “fake news” and influence campaigns on Twitter. Knight Foundation. https://knightfoundation.org/reports/disinformation-fake-news-and-influence-campaigns-on-twitter/
Johnson, C. G. (2011). The urban precariat, neoliberalization, and the soft power of humanitarian design. Journal of Developing Societies, 27(3–4), 445–475. https://doi.org/10.1177/0169796X1102700409
Kalnes, O. (2009). Norwegian parties and Web 2.0. Journal of Information Technology & Politics, 6(3–4), 251–266. https://doi.org/10.1080/19331680903041845
Keating, V., & Kaczmarska, K. (2019). Conservative soft power: Liberal soft power bias and the ‘hidden’ attraction of Russia. Journal of International Relations and Development, 22(1), 1–27. https://doi.org/10.1057/s41268-017-0100-6
Klein, P. (2008). Web 2.0: Reinventing democracy. CIO Insight, 92, 30–36.
Kokas, A. (2022). Trafficking data: How China is winning the battle for digital sovereignty. Oxford University Press. https://doi.org/10.1093/oso/9780197620502.001.0001
Lee, C., Merizalde, J., Colautti, J., An, J., & Kwak, H. (2022). Storm the Capitol: Linking offline political speech and online Twitter extra-representational participation on QAnon and the January 6 insurrection. Frontiers in Sociology, 7. https://doi.org/10.3389/fsoc.2022.876070
Lewandowsky, S., Ecker, U., Seifert, C., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
Lin, H. (2016, September 20). Attribution of malicious cyber incidents: From soup to nuts. Lawfare. https://lawfareblog.com/attribution-malicious-cyber-incidents-soup-nuts-0
Marlin-Bennett, R. (2004). Knowledge power: Intellectual property, information, and privacy. Lynne Rienner Publishers. https://doi.org/10.1515/9781685850906
Marlin-Bennett, R. (2011). I hear America tweeting and other themes for a virtual polis: Rethinking democracy in the global infotech age. Journal of Information Technology & Politics, 8(2), 129–145. https://doi.org/10.1080/19331681.2011.53675
Marlin-Bennett, R. (2013). Embodied information, knowing bodies, and power. Millennium: Journal of International Studies, 41(3), 601–622. https://doi.org/10.1177/0305829813486413
Marlin-Bennett, R. (2022). Soft power’s dark side. Journal of Political Power, 15(3), 437–455. https://doi.org/10.1080/2158379X.2022.2128278
Marlin-Bennett, R., & Jackson, S. (2022). DIY cruelty: The global political micro-practices of hateful memes. Global Studies Quarterly, 2(2). https://doi.org/10.1093/isagsq/ksac002
Morgan, J., & Cappella, J. (2023). The effect of repetition on the perceived truth of tobacco-related health misinformation among U.S. adults. Journal of Health Communication, 28(3), 182–189. https://doi.org/10.1080/10810730.2023.2192013
Mueller, M. (1999). ICANN and Internet governance: Sorting through the debris of “self-regulation”. Info, 1(6), 497–520. https://doi.org/10.1108/14636699910801223
Mueller, M., Grindal, K., Kuerbis, B., & Badiei, F. (2019). Cyber attribution: Can a new institution achieve transnational credibility? The Cyber Defense Review, 4(1), 107–122.
Nietzsche, F. (2012). Beyond good and evil. Andrews UK. http://ebookcentral.proquest.com/lib/jhu/detail.action?docID=977667
Nimmo, B., Eib, C., & Tamora, L. (2019). Cross-platform spam network targeted Hong Kong protests: “Spamouflage dragon” used hijacked and fake accounts to amplify video content. Graphika. https://public-assets.graphika.com/reports/graphika_report_spamouflage.pdf
Nye, J. (1990a). Bound to lead: The changing nature of American power. Basic Books.
Nye, J. (1990b). Soft power. Foreign Policy, 80, 153–171. https://doi.org/10.2307/1148580
Nye, J. (2002). The information revolution and American soft power. Asia-Pacific Review, 9(1), 60–76. https://doi.org/10.1080/13439000220141596
Nye, J. (2008). Public diplomacy and soft power. The Annals of the American Academy of Political and Social Science, 616(1), 94–109. https://doi.org/10.1177/0002716207311699
Nye, J. (2019). Soft power and public diplomacy revisited. In Melissen, J. & Wang, J. (Eds.), Debating public diplomacy: Now and next (pp. 7–20). Brill Nijhoff. http://brill.com/view/book/edcoll/9789004410824/BP000006.xml
Nye, J. (2021). Soft power: The evolution of a concept. Journal of Political Power, 14(1), 196–208. https://doi.org/10.1080/2158379X.2021.1879572
Parycek, P., & Sachs, M. (2010). Open government – Information flow in Web 2.0. European Journal of ePractice, 9, 57–68.
Piper, J. (2023, September 24). Anti-vaxxers are now a modern political force. Politico. https://politico.com/news/2023/09/24/anti-vaxxers-political-power-00116527
Quarles, C. (1999). The Ku Klux Klan and related American racialist and antisemitic organizations: A history and analysis. McFarland.
Raddaoui, A. (2012). Democratization of knowledge and the promise of Web 2.0: A historical perspective. In Beldhuis, H. (Ed.), Proceedings of the 11th European Conference on E-Learning (pp. 435–441). Acad. Conferences Ltd. https://webofscience.com/wos/woscc/full-record/WOS:000321613000053
Reddick, C., & Aikins, S. (2012). Web 2.0 technologies and democratic governance. In Reddick, C. & Aikins, S. (Eds.), Web 2.0 technologies and democratic governance: Political, policy and management implications (pp. 1–7). Springer. https://doi.org/10.1007/978-1-4614-1448-3_1
Rosenzweig, R. (1998). Wizards, bureaucrats, warriors, and hackers: Writing the history of the internet. The American Historical Review, 103(5), 1530–1552. https://doi.org/10.2307/2649970
Schradie, J. (2011). The digital production gap: The digital divide and Web 2.0 collide. Poetics, 39(2), 145–168. https://doi.org/10.1016/j.poetic.2011.02.003
Seduce, v. (2023). In OED Online (3rd ed.). Oxford University Press. https://oed.com/view/Entry/174721
Siekmeier, J. (2014, June 6). Bolivia shows how Andean nations can be punished by US neoliberal soft power if they refuse to assist in the ‘war on drugs’. https://blogs.lse.ac.uk/usappblog/2014/06/06/bolivia-shows-how-andean-nations-can-be-punished-by-u-s-neoliberal-soft-power-if-they-refuse-to-assist-in-the-war-on-drugs/
Sjoberg, L. (2018). Jihadi brides and female volunteers: Reading the Islamic State’s war to see gender and agency in conflict dynamics. Conflict Management and Peace Science, 35(3), 296–311. https://doi.org/10.1177/0738894217695050
Szabla, M., & Blommaert, J. (2020). Does context really collapse in social media interaction? Applied Linguistics Review, 11(2), 251–279. https://doi.org/10.1515/applirev-2017-0119
Taggart, J., & Abraham, K. (2023). Norm dynamics in a post-hegemonic world: Multistakeholder global governance and the end of liberal international order. Review of International Political Economy, 1–28. https://doi.org/10.1080/09692290.2023.2213441
Tapscott, D., & Williams, A. (2006). Wikinomics: How mass collaboration changes everything. Portfolio.
Tapscott, D., & Williams, A. (2010). MacroWikinomics: Rebooting business and the world. Portfolio Penguin.
Tella, O. (2023). The diaspora’s soft power in an age of global anti-Nigerian sentiment. Commonwealth & Comparative Politics, 61(2), 177–196. https://doi.org/10.1080/14662043.2022.2127826
The Economist. (2023, March 30). Both America’s political camps agree that TikTok is troubling. The Economist. https://economist.com/united-states/2023/03/30/both-americas-political-camps-agree-that-tiktok-is-troubling
The RAND Corporation. (2019). Accountability in cyberspace: The problem of attribution. The RAND Corporation. https://youtube.com/watch?v=ca9xomGmZPc
Tiffany, K. (2023, January 24). Twitter has no answers for #DiedSuddenly. The Atlantic. https://theatlantic.com/technology/archive/2023/01/died-suddenly-documentary-covid-vaccine-conspiracy-theory/672819/
Topinka, R. (2018). Politically incorrect participatory media: Racist nationalism on r/ImGoingToHellForThis. New Media & Society, 20(5), 2050–2069. https://doi.org/10.1177/1461444817712516
Unkelbach, C., Koch, A., Silva, R., & Garcia-Marques, T. (2019). Truth by repetition: Explanations and implications. Current Directions in Psychological Science, 28(3), 247–253. https://doi.org/10.1177/0963721419827854
Van Dijck, J., & Nieborg, D. (2009). Wikinomics and its discontents: A critical analysis of Web 2.0 business manifestos. New Media & Society, 11(5), 855–874. https://doi.org/10.1177/1461444809105356
Verhage, A. (2009). Between the hammer and the anvil? The anti-money laundering-complex and its interactions with the compliance industry. Crime, Law & Social Change, 52(1), 9–32. https://doi.org/10.1007/s10611-008-9174-9
Vidgen, B., Margetts, H., & Harris, A. (2019). How much online abuse is there? A systematic review of evidence for the UK (Policy Briefing, Hate Speech: Measures and Counter Measures). Alan Turing Institute. https://turing.ac.uk/sites/default/files/2019-11/online_abuse_prevalence_full_24.11.2019_-_formatted_0.pdf
Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1565
Wilson, E. (2008). Hard power, soft power, smart power. The Annals of the American Academy of Political and Social Science, 616(1), 110–124. https://doi.org/10.1177/0002716207312618
Wilson, S., & Wiysonge, C. (2020). Social media and vaccine hesitancy. BMJ Global Health, 5(10). https://doi.org/10.1136/bmjgh-2020-004206
Yang, J., & Counts, S. (2010). Predicting the speed, scale, and range of information diffusion in Twitter. Proceedings of the International AAAI Conference on Web and Social Media, 4(1), 355–358. https://aaai.org/papers/00355-14039-predicting-the-speed-scale-and-range-of-information-diffusion-in-twitter/
Zaharna, R. (2010). The soft power differential: Mass communication and network communication. In Zaharna, R. (Ed.), Battles to bridges: U.S. strategic communication and public diplomacy after 9/11 (pp. 92–114). Palgrave Macmillan. https://doi.org/10.1057/9780230277922_6
Zahran, G., & Ramos, L. (2010). From hegemony to soft power: Implications of a conceptual change. In Soft power and US foreign policy (1st ed., pp. 12–31). Routledge.
Zhong, W. (2023, May 3). Who gets the algorithm? The bigger TikTok danger. Lawfare. https://lawfaremedia.org/article/who-gets-the-algorithm-the-bigger-tiktok-danger
Zuquete, J., & Marchi, R. (2023). Postscript. In Global identitarianism. Routledge. https://doi.org/10.4324/9781003197607

References

Ababneh, K. I., Ahmed, K., & Dedousis, E. (2022). Predictors of cheating in online exams among business students during the Covid pandemic: Testing the theory of planned behavior. International Journal of Management Education, 20(3), 115. https://doi.org/10.1016/j/ijme.2922.100713Google Scholar
Aïmeur, E., Amri, S., & Brassard, G. (2023). Fake news, disinformation and misinformation in social media: a review. Social Network Analysis and Mining, 13(1), 30. https://link.springer.com/article/10.1007/s13278-023-01028-510.1007/s13278-023-01028-5CrossRefGoogle Scholar
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179211. https://doi.org/10.1016/0749-5978(91)90020-TCrossRefGoogle Scholar
Ajzen, I. (2005). Attitudes, personality, and behavior. New York: Open University Press.Google Scholar
Al-Shanfari, I., Yassin, W., Tabook, N., Ismail, R., & Ismail, A. (2022). Determinants of information security awareness and behaviour strategies in public sector organizations among employees. International Journal of Advanced Computer Science and Applications, 13(8), 479490.Google Scholar
Alieva, I., Moffitt, J. D., & Carley, K. M. (2022). How disinformation operations against Russian opposition leader Alexei Navalny influence the international audience on Twitter. Social Network Analysis and Mining, 12(1), 113. https://doi.org/10.1007/s13278-022-00908-6CrossRefGoogle ScholarPubMed
Andersen, J. & Søe, S. O. (2020). Communicative actions we live by: The problem with fact-checking, tagging or flagging fake news – The case of Facebook. European Journal of Communication, 35(2), 126139. https://doi.org/10.1177/0267323119894489CrossRefGoogle Scholar
Arayankalam, J. & Krishnan, S. (2021). Relating foreign disinformation through social media, domestic online media fractionalization, government’s control over cyberspace, and social media-induced offline violence: Insights from the agenda-building theoretical perspective. Technological Forecasting and Social Change, 166, 114. https://doi.org/10.1016/j.techfore.2021.120661CrossRefGoogle Scholar
Benson, T. (2020, July 29). Twitter bots are spreading massive amounts of COVID-19 misinformation. IEEE Spectrum. https://spectrum.ieee.org/twitter-bots-are-spreading-massive-amounts-of-covid-19-misinformationGoogle Scholar
Bontcheva, K., Posetti, J., Teyssou, D., Meyer, T., Gregory, S., Hanot, C., & Maynard, D. (2020). Balancing act: Countering digital disinformation while respecting freedom of expression. United Nations Educational, Scientific and Cultural Organization (UNESCO). https://unesco.org/en/articles/balancing-act-countering-digital-disinformation-while-respecting-freedom-expressionGoogle Scholar
Borges, P. M. & Gambarato, R. R. (2019). The role of beliefs and behavior on Facebook: A semiotic approach to algorithms, fake news, and transmedia journalism. International Journal of Communication, 13, 603618. https://ijoc.org/index.php/ijoc/article/view/10304/2550Google Scholar
Buchanan, T. & Benson, V. (2019). Spreading disinformation on Facebook: Do trust in message source, risk propensity, or personality affect the organic reach of “fake news”? Social Media+ Society, 5(4), 19. https://doi.org/10.1177/2056305119888654Google Scholar
Cabeza-Ramirez, L. J., Sanchez-Canizares, S. M., Santos-Roldan, L. M., & Fuentes-Garcia, F. J. (2022). Impact of the perceived risk in influencers’ product recommendations on their followers’ purchase attitudes and intention. Technological Forecasting and Social Change, 184, 116. https://doi.org/10.1016/j.techfore.2022.121997CrossRefGoogle Scholar
Caramancion, K. M., Li, Y., Dubois, E., & Jung, E. S. (2022). The missing case of disinformation from the cybersecurity risk continuum: A comparative assessment of disinformation with other cyber threats. Data, 7(4), 118. https://doi.org/10.3390/data7040049CrossRefGoogle Scholar
Carrapico, H. & Farrand, B. (2021). When trust fades, Facebook is no longer a friend: Shifting privatisation dynamics in the context of cybersecurity as a result of disinformation, populism and political uncertainty. JCMS: Journal of Common Market Studies, 59(5), 11601176. https://doi.org/10.1111/jcms.13175Google Scholar
Carratalá, A. (2023). Disinformation and sexual and gender diversity in Spain: Twitter users’ response, and the perception of LGBTQI+ organisations. Social Sciences, 12(4), 119. https://doi.org/10.3390/socsci12040206CrossRefGoogle Scholar
Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 17531820.Google Scholar
Chuttur, M. (2009). Overview of the technology acceptance model: Origins, developments and future directions. Sprouts: Working Papers on Information Systems, 9(37), 122.Google Scholar
Collins, B., Hoang, D. T., Nguyen, N. T., & Hwang, D. (2021). Trends in combating fake news on social media – A survey. Journal of Information and Telecommunication, 5(2), 247266. https://doi.org/10.1080/24751839.2020.1847379CrossRefGoogle Scholar
Collins, L. M. (2007). Research design and methods, encyclopedia of gerontology. In Encyclopedia of gerontology (pp. 433442). Elsevier. https://doi.org/10.1016/B0-12-370870-2/00162-1CrossRefGoogle Scholar
Culloty, E., Suiter, J., Viriri, I., & Creta, S. (2022). Disinformation about migration: An age-old issue with new tech dimensions. In World migration report (pp. 123). International Organization for Migration.Google Scholar
Cybersecurity and Infrastructure Security Agency (CISA). (2022). Tactics of disinformation. CISA. https://cisa.gov/sites/default/files/publications/tactics-of-disinformation_508.pdfGoogle Scholar
Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation, Massachusetts Institute of Technology. https://researchgate.net/publication/35465050Google Scholar
Diekman, C., Ryan, C. D., & Oliver, T. L. (2023). Misinformation and disinformation in food science and nutrition: Impact on practice. Journal of Nutrition, 153(1), 39. https://doi.org/10.1016/j.tjnut.2022.10.001CrossRefGoogle ScholarPubMed
Edwards, C., Beattie, A. J., Edwards, A., & Spence, P. R. (2016). Differences in perceptions of communication quality between a Twitterbot and human agent for information seeking and learning. Computers in Human Behavior, 65, 666671. https://doi.org/10.1016/j.chb.2016.07.003CrossRefGoogle Scholar
European Commission. (2020). On the European Democracy Action Plan, COM (2020), 790. EU Monitor.Google Scholar
European Commission. (2023). Digital services act: Application of the risk management framework to Russian disinformation campaigns. Publications Office of the European Union.Google Scholar
European Parliament. (2022). Resolution of 9 March 2022 on foreign interference in all democratic processes in the European Union, including disinformation (2020/2268(INI)) (2022/C 347/07). European Parliament. https://europarl.europa.eu/doceo/document/TA-9-2022-0064_EN.htmlGoogle Scholar
European Parliamentary Research Service. (2020, April 9). Disinformation and science: A survey of the gullibility of students with regard to false scientific news. European Parliament, Panel for the Future of Science and Technology (STOA). https://europarl.europa.eu/RegData/etudes/STUD/2020/656300/EPRS_STU(2020)656300_EN.pdfGoogle Scholar
Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. The University of Akron Press.Google Scholar
Fallis, D. (2014). The varieties of disinformation. In Floridi, L., & Illari, P. (Eds.), The philosophy of information quality (pp. 135161). Synthese Library.10.1007/978-3-319-07121-3_8CrossRefGoogle Scholar
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 3950. https://doi.org/10.2307/3151312CrossRefGoogle Scholar
Hair, J. F., Jr., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM). Sage.Google Scholar
Hair, J. F., Jr., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). Partial least squares structural equation modeling (PLS-SEM) using R: A workbook. Springer Nature.10.1007/978-3-030-80519-7CrossRefGoogle Scholar
Jankelová, N., Joniaková, Z., & Skorková, Z. (2021). Perceived organizational support and work engagement of first-line managers in healthcare – The mediation role of feedback seeking behavior. Journal of Multidisciplinary Healthcare, 14, 31093123. https://doi.org/10.2147/JMDH.S326563CrossRefGoogle ScholarPubMed
Jia, H., Yu, J., Feng, T., Ning, L., Cao, P., Shang, P., Gao, S., & Yu, X. (2022). Factors influencing medical personnel to work in primary health care institutions: An extended theory of planned behavior. International Journal of Environmental Research and Public Health, 19(5), 115. https://doi.org/10.3390/ijerph19052785CrossRefGoogle ScholarPubMed
Jungherr, A., & Schroeder, R. (2021). Disinformation and the structural transformations of the public arena: Addressing the actual challenges to democracy. Social Media+ Society, 7(1), 113. https://doi.org/10.1177/2056305121988928Google Scholar
Khani Jeihooni, A., Layeghiasl, M., Yari, A., & Rakhshani, T. (2022). The effect of educational intervention based on the theory of planned behavior on improving physical and nutrition status of obese and overweight women. BMC Women’s Health, 22(1), 19. https://doi.org/10.1186/s12905-022-01-01593-5CrossRefGoogle ScholarPubMed
Knight Foundation. (2018, October). Disinformation, ‘fake news’ and influence campaigns on Twitter. Knight Foundation. https://s3.amazonaws.com/kf-site-legacy-media/feature_assets/www/misinfo/kf-disinformation-report.0cdbb232.pdfGoogle Scholar
Kock, N. (2014). Stable P value calculation methods in PLS-SEM. ScriptWarp Systems. https://researchgate.net/publication/269989910_Stable_P_value_calculation_methods_in_PLS-SEMGoogle Scholar
Krafft, P. M., & Donovan, J. (2020). Disinformation by design: The use of evidence collages and platform filtering in a media manipulation campaign. Political Communication, 37(2), 194214. https://doi.org/10.1080/10584609.2019.1686094CrossRefGoogle Scholar
Machete, P., & Turpin, M. (2020). The use of critical thinking to identify fake news: A systematic literature review. In Hattingh, M., Matthee, M., Smuts, H., Pappas, I., Dwivedi, Y. K., & Mantymaki, M. (Eds.), Responsible design, implementation and use of information and communication technology (pp. 235246). Springer International Publishing. https://doi.org/10.1007/978-3-030-45002-1_20CrossRefGoogle Scholar
Maheri, M., Rezapour, B., & Didarloo, A. (2022). Predictors of colorectal cancer screening intention based on the integrated theory of planned behavior among the average-risk individuals. BMC Public Health, 22(1), 111. https://doi.org/10.1186/s12889-022-14191-9CrossRefGoogle ScholarPubMed
Marikyan, M., & Papagiannidis, P. (2021). Unified theory of acceptance and use of technology. TheoryHub Book.Google Scholar
Mattioli, R., Malatras, A., Hunter, E. N., Biasibetti Penso, M. G., Bertram, D., & Neubert, I. (2023). Identifying emerging cyber security threats and challenges for 2030. ENISA.Google Scholar
Michaelis, M., Jafarian, J. H., & Biswas, A. (2022). The dangers of money and corporate power relating to online disinformation. In 2022 23rd IEEE international conference on mobile data management (MDM) (pp. 470475). IEEE.10.1109/MDM55031.2022.00101CrossRefGoogle Scholar
Nenadić, I. (2019). Unpacking the “European approach” to tackling challenges of disinformation and political manipulation. Internet Policy Review, 8(4), 122. https://doi.org/10.14763/2019.4.1436CrossRefGoogle Scholar
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. McGraw-Hill.Google Scholar
Ó Fathaigh, R., Helberger, N., & Appelman, N. (2021). The perils of legally defining disinformation. Internet Policy Review, 10(4), 125. https://doi.org/10.14763/2021.4.1584CrossRefGoogle Scholar
Olan, F., Jayawickrama, U., Arakpogun, E. O., Suklan, J., & Liu, S. (2022). Fake news on social media: The impact on society. Information Systems Frontiers, 26, 443–458. https://doi.org/10.1007/s10796-022-10242-zCrossRefGoogle Scholar
Park, E. S., & Park, M. S. (2020). Factors of the technology acceptance model for construction IT. Applied Sciences, 10(22), 115. https://doi.org/10.3390/app10228299CrossRefGoogle Scholar
Pérez-Escolar, M., Lilleker, D., & Tapia-Frade, A. (2023). A systematic literature review of the phenomenon of disinformation and misinformation. Media and Communication, 11(2), 7687. https://doi.org/10.17645/mac.v11i2.6453CrossRefGoogle Scholar
Pierri, F., Artoni, A., & Ceri, S. (2020). Investigating Italian disinformation spreading on Twitter in the context of 2019 European elections. PLOS One, 15(1), 123. https://doi.org/10.1371/journal.pone.0227821CrossRefGoogle ScholarPubMed
Rana, M. S., Nobi, M. N., Murali, B., & Sung, A. H. (2022). Deepfake detection: A systematic literature review. IEEE Access, 10, 2549425513.10.1109/ACCESS.2022.3154404CrossRefGoogle Scholar
Ringle, C. M., Hair, J. F., & Sarstedt, M. (2014). PLS-SEM: Looking back and moving forward. Long Range Planning, 47(3), 132137. https://doi.org/10.1016/j.lrp.2014.02.008Google Scholar
Romero-Galisteo, R.-P., Gonzalez-Sanches, M., Galvez-Ruiz, P., Palomo-Carrion, R., Casuso-Holgado, M. J., & Pinero-Pinto, E. (2022). Entrepreneurial intention, expectations of success and self-efficacy in undergraduate students of health sciences. BMC Medical Education, 22(1), 17. https://doi.org/10.1186/s12909-022-03731-xCrossRefGoogle ScholarPubMed
Sabi, H. M., Uzoka, F. M. E., Langmia, K., & Njeh, F. N. (2016). Conceptualizing a model for adoption of cloud computing in education. International Journal of Information Management, 36(2), 183191. https://doi.org/10.1016/j.ijinfomgt.2015.11.010CrossRefGoogle Scholar
Savari, M., Mombeni, A. S., & Izadi, H. (2022). Socio-psychological determinants of Iranian rural households’ adoption of water consumption curtailment behaviors. Scientific Reports, 12(1), 112. https://doi.org/10.1038/s41598-022-17560-xCrossRefGoogle ScholarPubMed
Schünemann, W. J. (2022). A threat to democracies?: An overview of theoretical approaches and empirical measurements for studying the effects of disinformation. In Cavelty, M. D. & Wenger, A. (Eds.), Cyber security politics (pp. 3247). Routledge.10.4324/9781003110224-4CrossRefGoogle Scholar
Shirahada, K., & Zhang, Y. (2022). Counterproductive knowledge behavior in volunteer work: Perspectives from the theory of planned behavior and well-being theory. Journal of Knowledge Management, 26(11), 2241. https://doi.org/10.1108/JKM-08-2021-0612CrossRefGoogle Scholar
Statista. (2023). Number of disinformation and pro-Russian Twitter posts in Poland 2023. Statista. https://statista.com/statistics/1365122/number-of-disinformation-and-pro-russian-twitter-posts-poland/Google Scholar
Tenove, C., & Tworek, H. J. S. (2019). Online disinformation and harmful speech: Dangers for democratic participation and possible policy responses. Journal of Parliamentary and Political Law, 13, 215232.Google Scholar
Ternovski, J., Kalla, J., & Aronow, P. (2022). The negative consequences of informing voters about deepfakes: Evidence from two survey experiments. Journal of Online Trust and Safety, 1(2), 116. https://doi.org/10.54501/jots.v1i2.28CrossRefGoogle Scholar
Trend Micro. (2020). Malicious uses and abuses of artificial intelligence. United Nations Interregional Crime & Justice Research Institute. https://unicri.it/sites/default/files/2020-11/AI%20MLC.pdfGoogle Scholar
Uyheng, J., Moffitt, J. D., & Carley, K. M. (2022). The language and targets of online trolling: A psycholinguistic approach for social cybersecurity. Information Processing and Management, 59(5), 115. https://doi.org/10.1016/j.ipm.2022.103012CrossRefGoogle Scholar
Vaccari, C. & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media and Society, 6 (1), 113. https://doi.org/10.1177/2056305120903408Google Scholar
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425478. https://doi.org/10.2307/30036540CrossRefGoogle Scholar
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157178. https://doi.org/10.2307/41410412CrossRefGoogle Scholar
Wardle, C., & Derakhshan, H. (2017, September 27). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277cGoogle Scholar
Watson, A. (2021). Main sources of disinformation as ranked by journalists worldwide as of June 2020. Statista. https://statista.com/statistics/1249671/journalists-cite-sources-disinformation-worldwide/Google Scholar
Weikmann, T., & Lecheler, S. (2022). Visual disinformation in a digital age: A literature synthesis and research agenda. New Media & Society, 25(12), 36963713. https://doi.org/10.1177/14614448221141648.CrossRefGoogle Scholar
Zhang, H., Shi, Z., Chen, J., & Zhang, Z. (2022). Understanding combined health and business risk behavior: Small tourism firm owners reopening amid Covid-19 in Pingyao, China. Behavioral Sciences (Basel), 12(10), 118. https://doi.org/10.3390/bs12100358Google ScholarPubMed
Figures and Tables

Figure 4.1 Wielding malign soft power.
Figure 5.1 Conceptual model.
Table 5.1a
Table 5.1b
Table 5.1c
Table 5.2 Construct reliability and validity – preliminary model estimated (sample size: 200 records from Poland)
Table 5.3 Construct reliability and validity – secondary model estimated (sample size: 200 records from Poland)
Figure 5.2 The final model with estimated coefficients (sample size: 200 records from Poland).
Table 5.4 PLS algorithm R² and path coefficients (sample size: 200 records from Poland)
Table 5.5 Bootstrapping path coefficients for the final model (sample size: 200 records from Poland)
Table 5.6 Construct reliability and validity – preliminary model estimated (sample size: 165 records from Romania)
Figure 5.3 The final model with estimated coefficients (sample size: 165 records from Romania).
Table 5.7 Construct reliability and validity – secondary model estimated (sample size: 165 records from Romania)
Table 5.8 PLS algorithm R² and path coefficients (sample size: 165 records from Romania)
Table 5.9 Bootstrapping path coefficients for the final model (sample size: 165 records from Romania)
