On October 7, 2023, Hamas and Palestinian Islamic Jihad launched a terrorist attack inside Israel that resulted in over 1,200 Israelis killed, over 3,300 wounded, and over 200 people – including Israelis and foreign nationals – taken hostage (Haaretz, 2023; The Economist, 2023). The attack’s unprecedented scale caught the Israeli government and defense forces off guard (Harding, Reference Harding2023). Subsequently, the Israeli government’s excessive military response to the terror attacks resulted in thousands of Palestinians killed, displaced, and in dire need of humanitarian assistance (Pamuk, Reference Pamuk2023). As in other contemporary military conflicts, information warfare is integrated into the offensive capabilities of all actors (Mueller et al., Reference Mueller, Jensen, Valeriano, Maness and Macias2023). The attacks on October 7 incorporated the coordinated use of video and audio content by Hamas fighters, who uploaded their content to digital messaging channels such as the encrypted messaging app Telegram (Frenkel & Myers, Reference Frenkel and Myers2023; Klempner, Reference Klempner2023). Graphic videos and images consequently circulated, with insufficient content moderation, on popular platforms including X (formerly Twitter), YouTube, Instagram, Facebook, and TikTok (Frenkel & Myers, Reference Frenkel and Myers2023; Klempner, Reference Klempner2023).
In line with this, disinformation has entered the fray, deployed by official as well as commercial actors. Analysts have argued that Israel was not prepared for the resulting information warfare on social media, with Hamas allegedly aided by Iran and Russia and, to a lesser extent, China (Benjakob, Reference Benjakob2023; Institute for Strategic Dialogue, 2023; Myers & Frenkel, Reference Myers and Frenkel2023). From denials that the October 7 attacks took place and claims that the Israel Defense Forces (IDF) orchestrated the attacks themselves, to fake or misleading images of explosions in buildings and hateful provocations that ignore the history of the Israeli–Palestinian conflict, social media feeds have been flooded with misleading or false information around the Israel–Hamas War – and actors on both sides have contributed (Nechin, Reference Nechin2023).
This chapter explores Israel’s public and private sector capabilities to counter disinformation, focusing on the period between March and June 2023 through qualitative interviews and thematic analysis. We argue that Israel has not prioritized countering disinformation – mainly because of internal political division and legislative gridlock, but also due to overshadowing security concerns related to Iran, as well as unresolved internal conflicts related to the West Bank and Gaza Strip. Most importantly, we argue that the fight against disinformation has been de-prioritized under the ruling Likud party’s Prime Minister Benjamin Netanyahu, who benefits from the divides sown and the support bolstered by the spread of internal disinformation. We review Israel’s mechanisms – both absent and present – to counter disinformation before October 7.
Background: Israeli Judicial Reforms, Democracy, and Disinformation
In May 2023, wide-scale protests in Israel ultimately forced Prime Minister Benjamin Netanyahu to abandon judicial reforms that would have threatened the independence of the Israeli judiciary – at least for now (Smith, Chuck, & Zelenko, Reference Smith, Chuck and Zelenko2023). Israel is a relatively young democracy, and arguably a troubled one (Gavison, Reference Gavison2011). The modern state grew out of Jewish movements that came to the Middle East in search of a safe place for all Jewish people. It was established in 1948 and has regularly fought wars and displaced local populations, as well as contended with internal skirmishes, in the decades since (Jones & Murphy, Reference Jones and Murphy2002). Cyberspace has increased in importance since 2000 – it is not only an added dimension during warfare but also a space where Israel exploits its capabilities to act preemptively against potential threats, exemplified in the targeted killings of Iranian nuclear scientists or the insertion of “cyber worms” into the computer systems of Iranian nuclear facilities (Farwell & Rohozinski, Reference Farwell and Rohozinski2011) – actions regularly criticized by international human rights lawyers (International Bar Association, 2020).
When shifting focus from external to internal threats, disinformation continues as a cyber-enabled aspect that challenges Israeli democracy. Disinformation is understood as false or misleading information that is intentionally composed and spread – however, it can lose its intentional character if people who were not part of the original campaign start spreading it themselves, thereby turning it into misinformation (Wardle, Reference Wardle2018). Disinformation can be especially potent in the lead-up to elections, when large parts of the population are trying to figure out whom to vote for – and, by extension, who will lead their country in the coming years (Bader, Reference Bader2018). Israel had five national elections in three years (from 2019 to 2022) (Jewish Virtual Library, 2023). Compare this with, for example, the United Kingdom (UK) – a state that resembles Israel in lacking a formal, written constitution, but differs in that Britain (or, more precisely, England) traces its democracy to the Magna Carta in the thirteenth century and is hence considered one of the world’s oldest democracies (Kiser & Barzel, Reference Kiser and Barzel1991). Israel’s recent political upheavals outstrip even the UK’s. While the UK has been derided by domestic and international observers for having three different prime ministers in one year (Langfitt, Reference Langfitt2022), Israel has not been able to resolve its cabinet quarrels internally – instead, it has needed to call on voters repeatedly. This is all to say that Israel can be considered a nascent democracy, both given its short time of existence and due to the outlined political instabilities, especially since 2020. Some argue that these political instabilities (in other words, the repeated need for irregular elections) are mainly tied to one person – Prime Minister Benjamin Netanyahu (Berg, Reference Berg2023).
The seventy-three-year-old is an Israeli politician who was elected to the office of prime minister for the sixth time in 2022 (Berg, Reference Berg2023). While acting as prime minister, he has also been the center of a criminal trial in which he is accused of bribery, fraud, and breach of trust – charges he fiercely denies (Freidson, Reference Freidson2023). Over the past years, he has grown into a polarizing and divisive figure – most recently because of his decision to form a government with the most right-wing Israeli parties. This coalition is joined at the hip due to:
Their unbridled passion to circumscribe the authority of the courts which, by their narrative, have intervened illegitimately to obstruct the democratic will of the citizenry as it would be implemented by the Knesset majority – i.e., them. Secondary villains on their most-wanted list include the state attorney, the media and the academy, although none of these institutions is a monolith.
Not only do many Israelis consider themselves surrounded by enemies (alluding to the notion that bordering countries either do not formally accept the Israeli state, like Syria, or actively attack the country, like Hezbollah from Lebanon), but Israelis are also acutely aware of internal divisions (Hermann & Anabi, Reference Hermann and Anabi2023). The Times of Israel described Netanyahu as “indefatigable and ultra-divisive” (Horovitz, Reference Horovitz2022). In the words of one of our interviewees, Ofir Barel, who researches Israeli information warfare at an independent institute and has collaborated with the Ministry of Foreign Affairs in the past:
[the] political discourse [is] already tense because in general Netanyahu himself is a very emotional political issue. If any one entity would like to sow chaos here during election or after or between election, [they] just need to mention Netanyahu and it will just work.
He was convinced that previous disinformation campaigns related to recent elections originated domestically and were designed to exploit existing polarization in Israeli society. The challenges associated with this type of disinformation ring familiar to other democracies around the world. In the United States, for example, domestic disinformation culminated in the storming of the Capitol on January 6, 2021, a deeply disturbing event for US citizens and the country’s international allies alike (Freedman, Reference Freedman2022). Disinformation should not be treated as a niche, disconnected phenomenon – instead, it should be understood as a cross-cutting issue of cyberspace with imminent offline repercussions. Countering it effectively therefore requires various actors to work together. In other words, private and public actors, both within and between nations, should come together to maintain democracy. In the case of Israel, an ambivalent picture emerges regarding the value of public actors’ interventions or even legislative actions aiming to diminish (the effects of) disinformation. This is mainly due to the divided political landscape in the country specifically and the reactive nature of legislative action generally. It is also questionable whether private actors of the (quite capable) Israeli information technology (IT) sector are useful or even benign forces in this regard. As recent journalistic investigations have uncovered, Israeli firms have been involved in developing and selling spy software (spyware), as well as offering propagandists for hire (Kirchgaessner et al., Reference Kirchgaessner, Ganguly, Pegg, Cadwalladr and Burke2023).
We therefore examine specific challenges to the Israeli (dis-)information environment in order to distill which of its lessons are useful for other democracies around the world. We address the presence – or absence – of legislative policy concerning disinformation and analyze the usability of public–private partnerships for countering disinformation. We argue that democracy relies on different parts of society functioning in a healthy manner; governmental actors can, but should not, control everything, and certainly not the information environment. Finally, we address how far Israeli insights can be transferred to initiatives in other democracies. The analysis relies on primary sources in Hebrew and English, such as committee hearings in the Knesset (Israel’s parliament), as well as interviews with Israeli stakeholders, such as former officials and currently active private actors addressing disinformation, supplemented by secondary sources on the topic.
Disinformation in Israel: An External and Internal Threat
Israel’s geographic location and geopolitics in the region have shaped, and continue to shape, its fight against disinformation campaigns. External information threats allegedly stemming from Iran – considered Israel’s main antagonist in the region – have been studied and monitored by the Israeli government and Israeli research institutes. According to Barel, Iran reportedly launched information campaigns that attempted to further divide Israeli citizens along political lines during recent national elections. However, numerous recent disinformation campaigns launched in Israel are suspected to have originated internally. Yet it is important to remember that internal and external threats in Israel are difficult to disentangle, according to Lt. Col. (res.) David Siman-Tov, a Senior Researcher at the Institute for National Security Studies (INSS) and deputy head of the Institute for the Research of the Methodology of Intelligence (IRMI). As is often the case in information warfare, internal disinformation campaigns echo sentiments that typically derive from external actors. Even so, there are key characteristics that give away externally launched disinformation campaigns in Israel, observed interviewee Omer Benjakob, a Haaretz journalist whose work covers technology and disinformation. First, Hebrew is a difficult language to master and is rarely spoken as a first language outside of Israel (BBC, 2014). Disinformation campaigns employing Hebrew therefore draw attention to themselves through grammatical errors, typos, or awkward phrases that a native speaker would immediately spot. Second, Israel is a small country – approximately the size of the state of New Jersey in the United States – and it is logistically difficult to blatantly fabricate news about events that are supposedly taking place (Nations Online, 2023). As Benjakob described it:
[It’s] very hard to lie at a big level. I can’t say there are riots in a certain part of town because everyone has a cousin who lives in that part of town. [I] can’t say Arabs are murdering people in droves when it’s a twenty-minute drive from your house. That undermines the ability to do a lot of fake news.
Yet disinformation campaigns targeting Arab voters do exist, even though they are more difficult to spread due to the intricacies of Hebrew and Israel’s small size (Benjakob, Reference Benjakob2021). For example, according to our interviewee Guy Lurie, an attorney and researcher at the Israel Democracy Institute, an independent and nonpartisan research center in Israel, campaigns aiming to disenfranchise Arab voters were disseminated during Israel’s recent elections. As Lurie described:
Very targeted campaigns we saw in Israel during election … for example in 2015 there was a targeted campaign saying that Arab voters are coming in droves to election booths and delegitimizing Arab citizens … trying to charge the public atmosphere with this kind of tension that would lead supporters of the Right to [the] election booth.
Disinformation campaigns created to rally supporters of right-wing parties – such as current Prime Minister Netanyahu’s Likud party – were popular during recent Israeli elections. According to Lurie, popular conspiracy theories claim that Netanyahu’s corruption charges are the work of a “leftist deep state.” These conspiracy theories have circulated since 2016, when investigations began into what The New York Times described as “a habit of performing official favors for wealthy businessmen in exchange for gifts both material and intangible” (Joseph & Kingsley, Reference Joseph and Kingsley2022). The theories claim that Israel’s top law enforcement officials charged Netanyahu with corruption in an attempt to preserve their own power and bring down a popular, elected leader. As Lurie observes, this “witch hunt” narrative is a very powerful rhetorical tool that has sparked distrust and anger among the Israeli public toward public institutions. Furthermore, the recent judicial reform protests in Israel have provided fertile ground for disinformation campaigns. Via the social media platform X (previously Twitter) but also the messaging app WhatsApp, actors supporting the judicial reforms created fake accounts that encouraged protestors to engage in violence against police officers in the hope of discrediting the protests. Likewise, fake accounts on X spread malicious information against senior Likud member of the Knesset Yuli Edelstein. Edelstein, one of the few Likud members to have voiced opposition to the judicial reform legislation, was targeted by accounts that claimed he was “a betrayer, a Ukrainian, born in the Soviet Union,” observed Barel (Hauser-Tov, Reference Hauser-Tov2023).
It is unsurprising that the most noted disinformation in Israel spreads on the country’s most popular social media platforms, since political disinformation around the world is most often studied and analyzed on such platforms. X and Facebook have been sources of fake accounts spreading disinformation, and WhatsApp has been cited as a place where group chats are infiltrated by malicious actors. Accounts created by actors who support the judicial reforms reportedly entered activist group chats on WhatsApp to sow chaos and create factions within the anti-judicial reform movement. For example, research conducted at Tel Aviv University found an instance in which these actors posted provocative material in activist groups, such as comparisons of Netanyahu to Hitler, which the prime minister then pointed to in claiming that “radical left activists” were comparing him to Hitler, observed Barel. While disinformation campaigns are identified and analyzed by researchers in Israel, the question of what measures currently exist, or should exist, to stop the spread of this malicious content remains open-ended and complex.
Countering Disinformation in Israel: Lacking Coherence, Planning, and Responsiveness
Existing legislation enabling the Israeli government to counter internal disinformation swiftly and effectively is limited. Specifically, the Knesset Elections Law, otherwise known as the Propaganda Methods Law, was created in 1959 to define campaign advertising regulations during Israeli elections (Shwartz Altshuler & Lurie, Reference Shwartz Altshuler and Lurie2015). Elements of the 1959 law are outdated and require amendments that address the powerful effects of social media and the internet during campaign seasons, though the law remains an important regulatory baseline for modern political elections (Shwartz Altshuler & Lurie, Reference Shwartz Altshuler and Lurie2015). Finding an opportune occasion to remediate these flaws has been a difficult task for the Israeli judiciary and legislature, as five elections in three years offered few periods of political stability sufficient to craft and establish such improved legislation. Israel’s Central Election Committee proposed recommendations to the Propaganda Methods Law in 2017 that included a principle of transparency, which required political advertisements to identify themselves as such. Under this principle, political actors or parties must identify themselves within political adverts, including those on social media (TOI Staff, 2019). The principle was implemented through a judicial ruling led by then Supreme Court Justice and Chairman of the Central Elections Committee Hanan Melcer (Surkes, Reference Surkes2019; TOI Staff, 2019). This ruling followed the Likud party’s insistent objections in 2019 to legislation that would have banned anonymous political ads online (Surkes, Reference Surkes2019; TOI Staff, 2019).
Even without solid legislative backing, the new transparency processes enabled the chairperson of the Election Committee to issue injunctions during the 2019 and 2022 campaign seasons against campaign advertising posing as news stories on social media channels such as Facebook and WhatsApp. However, Lurie noted that while this principle of transparency is a positive step toward stopping the spread of political disinformation, more needs to be done:
It’s not enough. Need to include more of transparency, obligation, ability [of the Chairperson] to enforce transparency especially in light of new media, social media and its ability to disseminate information in a very quick manner … Need to create more robust regulatory landscape that allow enforcement of these issues.
However, as noted by Lurie and additional experts, the government’s ability to curb disinformation through legislation enters the sensitive, important realm of freedom of speech debates.
In terms of actual content of election campaigns, even if they seem to be misleading or seem to be disinformation, they [Election Committee] are very restrained and very reluctant to enter the fray in terms of freedom of speech. Even when they are intervening, evidentiary basis must be very strong.
In this regard, Noam Schwartz, CEO and co-founder of tech company ActiveFence, believes that the most effective fight against disinformation cannot come from the government because of the potential for freedom of speech violations. If the government were to aggressively fight disinformation, it would be “rightly so accused of censorship. The only way to curb disinformation is to go to where the disinformation is being distributed.” Illiberal and authoritarian figures can label opponents’ political speech as disinformation in order to silence dissent and deter a plurality of voices, as examples in Egypt and Russia have shown (Jack, Reference Jack2022; Jumet, Reference Jumet, Cavatorta, Mekouar and Topak2022). Researchers invested in preserving Israeli democracy strive to imagine a balance between freedom of political speech and holding malicious parties accountable. Additionally, Schwartz contemplated what the Israeli government would look like if it did block swaths of domains from the internet because it deemed them “disinformation.” “We’ll have a very different regime if they do that,” said Schwartz. “All of the sudden the government is not a democracy anymore. They are deciding for you.” Who needs to regulate, what should be regulated, and how to regulate are complex and universal questions for any democratic government approaching legislation to fight disinformation. Schwartz also questions whether government intervention in the disinformation fight is even feasible:
Israel can’t tell Facebook what to do. They have no teeth, the government. What will they say? ‘You are not allowed to operate in this country anymore.’ They can’t really because Facebook’s power is so much business.
Even if a more robust landscape for countering disinformation were created, it remains unclear who would be responsible for creating and enforcing the relevant legislation. “The problem is Israel cannot find a dedicated team to countering disinformation,” said Barel. “Efforts are ad hoc, not systematic.” It is also unclear what is appropriate to regulate and how to regulate it. For example, campaign advertising and campaign finance laws are entirely different domains in Israel that would both need to be addressed through robust regulatory efforts. The recent churn of governments, ministers, and civil servants in Israel has left a noted void in both expertise about, and legislative enthusiasm for, countering internal disinformation.
Countering Disinformation in Israel: What About the Private Sector?
Benjakob believes, with some resignation, that the only real solutions currently on the table stem from social media companies, which are primarily concerned with attracting eyeballs to screens for advertising dollars.
At the legislation level it’s really a game of whack-a-mole. You reach the really sad conclusion that the only mechanism we actually have against disinformation is social media platforms user guidelines, which to me is tantamount to saying we have nothing in place.
Placing the responsibility on tech companies to counter internal disinformation in Israel is a double-edged sword. On the one hand, the tech companies have the technical access to their platforms and the expertise to find, identify, report, and remove harmful content online. According to Schwartz, platforms have vastly improved the identification and removal of harmful content over Israel’s past five election campaigns in three years. Online disinformation surrounding these recent, repeated elections has served as an optimal testing ground for tech companies to improve their strategies against disinformation. However, while technologists at the platforms are improving, so are the producers of disinformation:
The tricks that used to be common are not that common anymore … What actors are getting better at is creating a more believable narrative. So, they can actually create a deep conviction among the people in what they are saying is true (Noam Schwartz).
Within the past five elections, political parties across the spectrum – including the ruling right-wing Likud party and the opposition centrist party Yesh Atid – have reportedly employed disinformation campaigns that attempted to resonate emotionally with Israeli citizens, observed Schwartz. For example, the recent and ongoing judicial reform protests in the first half of 2023 are complex, deeply resonant political fights with wide-ranging implications for Israel’s democratic institutions. As Schwartz and his team monitored the online space, he commented that:
The Right will say, “Hey, protestors on the right you need to go to demonstrations because the anarchists on the left are ruining this country” … and the Left will say “the Right is trying to make Israel a dictatorship and take it back 1,000 years.”
Since the judicial reforms and the charged debates over Prime Minister Netanyahu touch an intimate, emotional nerve for many Israelis, much disinformation – particularly one party lying about what another party said – is propagated through social media channels that host personal family chats, such as WhatsApp. These encrypted messaging apps hosting private conversations are much more difficult for content moderators at tech companies to monitor. The extent to which these social media platforms can help fight internal disinformation is thus limited. On the other hand, tech companies do not exist to serve the best interest of the public. As corporate entities, they naturally aim to grow profits and appease stakeholders. “I don’t think the platforms owe you anything. It’s not a public service. It’s not a pension fund,” commented Schwartz. While partnerships between the Israeli government and social media companies exist to help fight election season disinformation – for example, Facebook banned certain political ads that aggressively discouraged people from voting in Israel’s November 2022 election – their effectiveness and reach are short term (Jerusalem Post Staff, 2022). As previously noted, the social media companies’ access to and expertise in their technology are not comprehensively understood or effectively regulated by constantly fluctuating government workers. Israel’s private sector – with advanced tech expertise and limited regulation – can therefore flourish in ways that best serve its entrepreneurial interests rather than prioritizing the public good. Israel’s high-tech culture and expertise in cybersecurity threats also give reasons for both pessimism and optimism in the fight against disinformation. Often described as a “start-up nation,” Israel has advanced technological skills with which to fight external information threats, most notably from Iran.
“The capabilities of the intelligence community – with an emphasis on the cyber field – can be a reason for optimism regarding the attempts of foreign campaigns,” noted Siman-Tov. Yet while Israel may be adept at pinpointing information threats from foreign entities, Siman-Tov is far less sanguine about internal ones: “At the same time, in front of the internal processes there are not many reasons for optimism.”
According to Benjakob, the privatization of Israel’s high-tech culture and expertise has been noticeably increasing, particularly over the past fifteen to twenty years. Journalistic investigations in Israel within the past year have exposed privatized, professional, Israeli-led disinformation operations; for example, the infamous Team Jorge’s undemocratic influence and information operations were exposed by journalistic teams at Haaretz and TheMarker in 2023 (Megiddo & Benjakob, Reference Megiddo and Benjakob2023). Privatized, for-hire intelligence efforts in Israel complicate the systematic fight against disinformation. Benjakob compared Israel’s disinformation boutiques to those of a larger country like the United States:
The way we’ve turned our high-tech stuff and the way we think about it and do it is a bit leaner … That’s what’s so dangerous within the Israeli space, where you have misinformation plus high-tech culture.
However, the rapid development of artificial intelligence (AI) tools may prove more dangerous than Israel’s high-tech boutique disinformation firms. Organized, coherent fights against internal disinformation in Israel have been complicated by limited jurisdiction over tech platforms, legislative gridlock, and a lack of regulatory fervor from the Likud government. It is therefore unclear whether swift, effective governmental regulation or other advocacy movements will emerge to fight AI tools that spread disinformation, for example, generative AI or robocalls. Schwartz stressed the critical, long-term nature of the fight against disinformation campaigns in Israel:
This is a constant fight. This is an arms race. This is not a fight over regulation. It’s way bigger than that. I don’t think regulation is necessarily the answer. If it is the answer, it needs to move fast.
Governmental regulation and overreliance on tech companies are not the only possible responses to disinformation in Israel. Interviewees suggested that Israel seriously join a movement already under way in other democracies, such as the UK and the United States: systemic investment in information and digital media literacy (Horrigan, Reference Horrigan2016). Promoting and investing in digital literacy, specifically media or information literacy, may help cultivate helpful habits in social media users, such as critically analyzing source material (i.e., news outlets and sources) and reporting malicious activity online.
However, the responsibility for teaching digital literacy to help counter disinformation in Israel should not be allocated solely to schools or parents. Placing an onus on the tech platforms to help their users navigate content critically, rather than believing whatever they read, hear, or see, is critical. Just as efforts in the United States have recently put the onus on social media platforms to protect young users’ mental health, so should governments put the onus on the platforms to encourage digital literacy (Richtel, Pearson, & Levenson, Reference Richtel, Pearson and Levenson2023). Digital and media literacy research has demonstrated mixed results, as some scholars emphasize the “disconnect between accuracy judgments and sharing intentions” (Sirlin et al., Reference Sirlin, Epstein, Arechar and Rand2021). Still, interviewees believed in the value of users employing their “conscience and their brain,” as Schwartz put it. For example, when news articles filled with propaganda about the judicial reforms are shared within family group chats on WhatsApp, it is up to the individual to assess the information presented. Information literacy contributes to a healthy, safe online lifestyle.
Conclusion
While countering disinformation can promote a safer online environment, it’s a very “non-sexy topic” in Israel, as Schwartz described it. Public discourse in Israel generally deprioritizes the topic. Israel considers itself surrounded by adversaries; in its seventy-five-year history, the country has at points been at war with neighbors Egypt, Jordan, Lebanon, and Syria and has yet to reach a peaceful solution on Palestinian statehood, which weighs on disheartened Palestinians in the West Bank and Gaza Strip (Waxman, Reference Waxman2019). Priorities in Israeli society include terrorist threats and potential nuclear weapons in Iran. By comparison, a fake news account created on X, for example, is considered a minor threat, according to Barel. Interviewees agreed that disinformation is indeed a danger to Israeli democracy and should be of high public concern. Yet there were varied accounts of where the Israeli public should look to see the detriments of disinformation. Barel argued that Israel need only look at the Capitol riots of January 6, 2021, in the United States to see the consequences of Russian information warfare. Barel compares the political tensions of the United States to the political instability that has been brewing in Israel over many years. “So, if it’s something we don’t take care of today effectively, we may create a monster that will hurt us in, I don’t know, a few years ahead in the future,” observed Barel. Benjakob disagrees and instead points toward smaller, former Soviet Union countries to predict Israel’s future. The way false news and disinformation manifest in Israel is more aligned with the quasi-liberal and increasingly illiberal democracies of Hungary and Poland than with the United States. Benjakob made the comparison:
In the US, UK you have this implicit assumption that truth reigns supreme and that people are not lying to you … the rest of the world attuned to that not actually ever being true. If you talk for example to Polish people, Hungarian people, Israelis feel the same, you’re kind of used to living in a world where you accept that there’s kind of some level of manipulation going on … it’s a young country and everyone was on board with the national project early on.
Whether Israel’s future attempts at countering internal disinformation will more closely mirror older, established democracies – such as the UK or the United States – or younger, evolving democracies – such as Hungary and Poland – is disputed. Yet a commonly shared belief is that Israel’s current government, headed by Netanyahu, is not interested in leading the charge against internal disinformation. Coordinated disinformation campaigns that disseminate conspiracies around Netanyahu’s corruption charges and attack judicial reform protestors and Likud’s political opponents work in favor of Netanyahu’s government. His willingness to strike down legislative efforts at transparency in political advertising further evidences this disinterest. As of this writing, it remains to be seen what action the Netanyahu government will take in response to the Israeli Supreme Court striking down key portions of its judicial reform bill in January 2024.
Furthermore, juggling emerging normalization efforts with Middle Eastern countries such as Saudi Arabia (Berman, Reference Berman2023), maintaining a strategic relationship with the United States amid recent criticism over judicial reforms (Pinkas, Reference Pinkas2023), and halting Iran’s nuclear program are governmental priorities that attract media headlines. And the terrorist attack of October 7, coupled with Israel’s aggressive response, will have far-reaching domestic and international political consequences that are yet to be determined. With a strong victory in the winter of 2022, the Right finally has the opportunity to set and push through its agenda in the Knesset. Barel commented on the overarching significance of Netanyahu’s proposed judicial reforms:
This is not just the case of let’s elect a justice in one way or the other, not something like this. The story behind it is much, much bigger. The right-wing in Israel for years felt that it cannot implement its policy … After 4 elections without a clear cut result, now November election finally resulted in a clear cut resolution. The Right wing in Israel sees a historic opportunity to make changes that would service for the long term … When you speak in those terms … disinformation … is secondary for making this historic change.
An outlook for the future of Israeli efforts to counter internal disinformation is mixed. The expert capabilities that Israel’s high-tech culture and start-up entrepreneurial spirit bring to the table could be reason for hope – yet those same capabilities have been used for harmful purposes in the past. Persistent protests against judicial reforms speak to Israeli citizens’ desire to protect their democratic institutions. Yet the government’s disinterest in countering disinformation, from which it benefits, and the public’s redirected attention to critical issues such as Iran’s nuclear program and relations with the Palestinians deprioritize the disinformation fight. A hopeful step toward prioritizing disinformation and preserving Israeli democracy is to remember that “Attempts of Iran and other entities make us [Israel] divided,” commented Barel. “We need to remember what makes us [Israel] united.” After the terrorist attacks of October 7, the perceived threats and consequences of disinformation in Israel have materialized and are playing out across the social media landscape. As in other political systems around the world, Israeli tech representatives, government actors, and citizens interested in a prospering democracy need to counter internal and external challenges – among them, disinformation.
Introduction
On Saturday, October 7, 2023, Hamas, a Palestinian political and military movement considered a terrorist organization by the European Union, launched a massive and unprecedented attack against Israel from the Gaza Strip, which it has controlled since 2007: Operation Al Aqsa Deluge.
Through meticulous intelligence work, preparation, and training, the Izz al-Din al-Qassam Brigades, the military wing of the movement, managed to thwart Israel’s extensive surveillance apparatus and introduce nearly 3,000 of its men into Israeli territory. The simultaneous attacks by these heavily armed fighters targeted military (military outposts on the outskirts of Gaza) and, more importantly, civilian (kibbutzim and nearby towns, music festival) targets, resulting in numerous casualties: 1,150 deaths, including 775 civilians.Footnote 1
The strategic surprise created by Hamas caught the Israeli state off guard: with insufficient resources on the ground, it took several hours to organize a response and regain control of the territories held by Palestinian terrorists. In addition to the security forces in the sector who were killed in the early hours of the operation (308 soldiers and 57 policemen), some active soldiers were on leave on this Sabbath and religious holiday, while for several months the army had been more heavily mobilized in the West Bank, where clashes with the Palestinian population were escalating.
For several hours, Hamas thus managed to control large swaths of Israeli territory bordering the Gaza Strip, allowing its members to organize the kidnapping of dozens of Israelis, mostly civilians, with – for the first time – a high proportion of women and children. In total, around 240 people were captured in a matter of hours.
An Unprecedented Attack
Hamas’ attack on October 7 is unprecedented in Israel’s history. Never before had an armed group belonging to a non-state organization penetrated Israeli territory so massively and for so long, or carried out such acts. Except for its War of Independence (1948), Israel has always managed to shift the theater of operations outside its national territory: into Egypt during the Suez War (1956), then into the Sinai, West Bank, and Golan Heights during the Six-Day War (1967), and even during the surprise attack that was the Yom Kippur War in 1973. Arab armies penetrated territories occupied by Israel since 1967, but never entered Israeli territory.
Hamas’ attack is therefore unprecedented in every respect: its scale, its number of victims and hostages, the identity of the dead (mostly civilians, including women, the elderly, and children), and the atrocities committed (rape, torture, and decapitated and burned bodies). The fact that this occurred within Israeli territory, in supposedly inviolable places (secure rooms), plunges society into terror and revives the trauma of antisemitic persecution in Europe, against which the State of Israel, created following the Holocaust, was meant to be a bulwark.
Internally, a Tense Moment
Hamas’ attacks come at a time of very high political tensions in Israel, as the country has been undermined for several years by the rise of identity and cultural claims, reflecting the diversity of Israeli society.Footnote 2 This fragmentation of the political field is exacerbated by an institutional and electoral systemFootnote 3 that prevents the emergence of a stable majority. Since 2019, the country has therefore experienced no less than five elections in just four yearsFootnote 4 (compared to one every four years usually), due either to the lack of a majority, or to majorities too narrow or coalitions too disparate to hold over time.
Since 2022, however, a coalition seems to have emerged: that between the Israeli right-wing party (Likud) led by Benjamin Netanyahu and various religious and nationalist parties, which draw strong opposition because of their radicalism. This coalition, moreover, has for several months been pushing for an institutional reform that, according to the opposition, threatens the democratic foundations of the state. For over a year, hundreds of thousands of people have been gathering every Saturday in Tel Aviv to oppose this project, which deeply divides society.
All these internal tensions have contributed to weakening the very heart of the Israeli security apparatus: Throughout 2023, more than 12,000 reservists announced they would refuse to report for voluntary service in the army, especially in elite units (commandos, intelligence, air force), supported by former senior officials of the Shin Bet (internal intelligence) (Bateman, Reference Bateman2023). Such was the situation in the months preceding the Hamas attack.
Internationally, a Crucial Moment
Beyond internal politics, the attacks on October 7 occurred at a crucial moment for Israeli diplomacy. At the end of 2020, Israel normalized its relations with several Arab countries (United Arab Emirates and Bahrain in September, Sudan in October, and Morocco in December) as part of the Abraham Accords, concluded under the auspices of the United States. Israel was thus reviving its normalization policy toward Arab countries, which had been at a standstill since the peace agreement with Jordan in 1994.
For several months, negotiations had been taking place behind the scenes for Saudi Arabia, a major oil power and home to Islam’s two holiest sites (Mecca and Medina), to join the Abraham Accords. On October 4, 2023, three days before the Hamas attacks, an Israeli minister attended a conference organized by a United Nations (UN) agency in Riyadh, a visit that would have been unthinkable a few months earlier.
These diplomatic successes for Israel validate the strategy initiated by Benjamin Netanyahu, who has been in power almost without interruption since 2009.Footnote 5 It can be summarized as follows: militarily contain the security threat posed by certain Palestinian groups; politically sideline the Palestinian issue; and focus on the Iranian threat, deemed existential, and on regional partnerships with Sunni powers.
The sudden return of the Palestinian issue on October 7 thus challenges Netanyahu’s strategy and undermines the entire Israeli foreign policy.
Communication as a Political Weapon
It was in this internal and international context that Israel launched its military response against the Gaza Strip on October 7. In the hours following the attacks, intense bombings targeted infrastructure presumed to belong to Hamas, while the country declared a state of war and recalled over 300,000 reservists, the largest mobilization ever launched by the state. Israeli authorities warned from the outset that the war would be long-lasting, intense, massive, and destructive, with a dual objective: to destroy the operational, political, and military capabilities of Hamas in Gaza, and to bring the hostages back to Israel. A third objective, never clearly stated, is nonetheless evident: to restore Israel’s deterrent capacity vis-à-vis its regional competitors (Hezbollah and Iran).

To achieve all these objectives, Israel must ensure it has the necessary military, financial, and political means. For this, the role of communication is fundamental, especially since Israeli authorities are aware of the fractures within their society, know they are being closely watched, and suffer from a degraded image among a portion of global public opinion due to their occupation policy in the West Bank. To wage its war against Hamas, Israel must therefore secure two conditions at two different geographical levels: (1) internally, long-term support from the population, some of whom will go to the front line and risk their lives, while all will be more or less economically affected – a complex objective given the country’s strong divisions and distrust toward the government; and (2) internationally, diplomatic, financial, and/or military support from its traditional allies within the Western Bloc (Europe and the United States), and the most neutral or least hostile posture possible from partners within the Arab world, whose public opinions are particularly sensitive to the Palestinian cause.
The success of these objectives requires the implementation of a communication strategy adapted to them. And indeed, on October 8, the Israeli state deployed a large communication campaign. At first glance, this all-encompassing strategy – on social networks, in the media, with political leaders, in French, English, and Arabic – may seem haphazard, opportunistic, and ill-suited. In fact, it has been finely crafted at the highest levels of the politico-military hierarchy and is the result of several decades of doctrinal evolution within the state apparatus. It is coordinated and responds to specific and targeted political and strategic objectives. It is hybrid, as it is designed to reach different targets. It sometimes appears ineffective due to the unpopularity or ridicule it incites, but this does not mean it has not reached its target audience and thus its objectives. In nearly four months of conflict (October 7, 2023–January 27, 2024), Israel produced several complementary narratives, published countless informational materials, and mobilized thousands of channels within its state apparatus and civil society, as well as in the Israeli and Jewish diaspora worldwide and among its supporters within Arab societies. This chapter aims to understand the informational maneuvers deployed by the State of Israel in the context of its war against Hamas since October 7 – and, above all, how and why they were implemented.
The first part explains how Israel’s informational and communication apparatus has been structured in recent decades and how the state has evolved its doctrine to face new challenges. It shows that the communication efforts of the Israeli army, initiated in the 1970s, struggled to lead to a coherent strategy and results until the advent of digital and social networks, which have radically changed the game. Without this, it is not possible to understand the scope and speed of the informational actions carried out by Israel immediately after the Hamas attacks.
The second part presents the political objectives sought by Israel and the narratives deployed to achieve them. These narratives are adapted to target audiences, whether they are intended for the Israeli public or for Western public opinion. Finally, Israel has adapted its communication strategy for Arabic-speaking populations, both in terms of content and channels, to reach its target audience.
Before October 7: Perfecting Israel’s Information System
In Israel, the Hebrew word Hasbara is generally used to designate the state’s communication strategy (Goodman, Reference Goodman2011, p. 22). The literal translation of this word means “explanation,”Footnote 6 but its use by Israel refers more to terms such as “justification” or even “propaganda” in its original sense (to persuade).
For half a century, Hasbara was the reference concept in Israel for thinking about the communication strategy implemented by state institutions such as the government, ministries, embassies, and the army, or by nongovernmental organizations (NGOs) affiliated with the state, within a highly vertical, top-down communication framework.
However, Hasbara as a doctrine failed to adapt properly to the changing international context, the changing nature of Israel’s conflicts, and the transformation of the means of communication. This is why, from the 2000s onwards, two new, complementary concepts began to emerge within the state: “public diplomacy” and “psychological warfare.”
The Limited Successes and Failures of Hasbara
As a concept, Hasbara involves producing and disseminating a narrative to target audiences with the aim of influencing public opinion on a specific political issue related to Israel. Its aim is to win public support and legitimize Israel’s actions and creation. But Hasbara has produced little effect, except perhaps before the creation of Israel.
The Early Years: A Lack of Vision, Will, and Means
Paradoxically, it was before the creation of the State of Israel that Hasbara was at its strongest in the world on the part of the Zionist movement and its representatives, who were seeking the support of the Western powers with a view to creating a state in Palestine under British mandate. Once the state had been created, it invested little in its communications for reasons of means and priority: The country, which was supported by all the major powers, was more concerned with absorbing waves of immigrants, some of whom were Holocaust survivors, developing the Negev desert, and building egalitarian communities (kibbutz); all actions that required no justification for the Israeli authorities, since they seemed so just and moral to them.
The creation of a Hasbara Office within the Ministry of Foreign Affairs, envisaged as early as 1948, did not materialize until the late 1960s, and then within the Army Spokesman’s Unit (Dover Tsahal) – a significant shift that reveals the new preoccupations of Israeli Hasbara. Indeed, Israel’s image changed radically with the Six-Day War (1967). The perception of a small state surrounded by threatening enemies collapsed after Israel’s lightning victory and conquest of vast territories. It was also at this time that many Arab and Muslim states were newly created as a result of the decolonization movement, intensifying international criticism of Israeli policy to such an extent that Israel decided to create a Ministry of Hasbara in 1974 (one year after the Yom Kippur War). Its main aim was to coordinate all the actors involved in the state’s strategic communications. However, a year after its creation, the Ministry was abolished due to excessive competition and intense interministerial rivalry (Shai, Reference Shai2013, p. 116).
Hasbara’s Repeated Failures from the 1970s Onwards
The year 1975 brought Israel’s first major diplomatic slap in the face on the world stage, when the UN General Assembly passed a resolution declaring Zionism a form of racism (supported by seventy-two states, with thirty-five opposed and thirty-two abstaining; United Nations General Assembly, 1975). Unquestionably, this episode marked the failure of Israel’s communication policy, which had been unable to win sufficient international support. It is also the origin of Israel’s mistrust of international bodies such as the UN: The Israeli authorities began to display a frank contempt for international criticism, which they considered biased.
At the same time, the nature of the conflicts in which Israel was engaged began to change in the 1980s, marking the end of conventional warfare between enemy armies. The outbreak of the First Intifada (1987–1993) pitted the Israeli army against the Palestinian population of the territories Israel had occupied since 1967 (West Bank and Gaza Strip). The information that Israel communicated to explain its actions in the territories was disrupted by the Palestinians, who maintained a close relationship with the foreign journalists on the ground, enabling them to tell their own story of the ongoing conflict. Israel’s communication strategy began to show its lack of effectiveness in the field, due to a lack of adaptability.
The same situation arose during Operation Grapes of Wrath (1996), which pitted the Israeli army against Hezbollah, a Shiite militia from southern Lebanon (Razoux, Reference Razoux2006). As a result, the government decided to create a National Hasbara Forum with the same objective as the ministry created two decades earlier; but this initiative met with the same end for the same reasons: Interministerial rivalry prevented concrete progress.
This lack of coordination would prove to be Israel’s undoing during the Second Intifada (2000–2005). The State of Israel, faced with an unprecedented wave of attacks on its soil, found that Palestinian actions were perceived as legitimate resistance to Israelis seen as oppressors. More importantly, the interaction between the Israeli army and the foreign media had changed radically. Foreign journalists were no longer content with reports and briefings from army spokesmen but went directly to officers in the field who had not been trained in communication skills; hence the incoherent and often contradictory messages (Eiland, Reference Eiland2010, pp. 27–37).
Acknowledged Ineffectiveness and a Failed Attempt to Change Doctrine
With the Intifada barely over, the Israeli army found itself in a new theater of operations, in southern Lebanon (2006). Hezbollah leader Hassan Nasrallah’s well-crafted communications, and above all the ineffectiveness of the Israeli Hasbara, helped spread the idea that the conflict had been lost by the army, despite Israel’s military successes on the ground. Israeli communications were so chaotic that a special report was produced by the state comptroller in Israel. This pointed to the absence of a strong conceptual approach to Hasbara and a lack of institutional coordination, leading to the absence of clear guidelines issued by the government and the failure of Israel’s communication (State Comptroller of Israel, 2007).
To remedy the situation, a special commission was set up under the Prime Minister’s Office. To avoid repeating the mistakes of the past, it brought together all the parties involved in the subject: the National Information Board, the Ministerial Committee for National Security, various ministers, as well as the spokesman for the army, and the Police and Internal Intelligence (Shin Bet). However, despite a strong will, the players involved were unable to develop a more precise overall conceptual approach.
According to former Tsahal general Avi Benayahou, the Hasbara concept lacks the sophistication to change and refine the perception of reality (Benayahu, Reference Benayahu2012, pp. 4–9). One of the reasons for this failure is its temporality: Hasbara comes into play just after the fact (post factum), that is, once the action has taken place. However, in the age of digital communication, smartphones, and social networks, this approach of explaining and justifying facts is often useless. Today’s communication requires a complex complementarity between political and military activities and the manipulation of emotions to arouse empathy and support, regardless of the veracity of the facts.
Public Diplomacy and Cognitive Campaigning: Two Complementary Approaches to Hasbara
It was in the early 2000s that Israel realized its inefficiency and inability to transform Hasbara into an operational communication doctrine. The evolution of conflicts combined with the transformation of means of communication, in a strong desire for doctrinal renewal, led to the emergence of two other approaches that were to become fully integrated into Israel’s communication strategy: public diplomacy and the cognitive campaign or psychological warfare (discussed later in the chapter).
Public Diplomacy: The Revival of Communication
Public diplomacy emerged in the mid-2000s at the initiative of the Ministry of Foreign Affairs, which wished to renew the Hasbara approach that had shown its limitations. This new communication doctrine also came at a time when the means of communication were evolving rapidly, and traditional Hasbara channels seemed outdated.
Its objective is radically different: Public diplomacy must not be a post-factum communication but, on the contrary, a long-term policy whose objective is no longer the dissemination of an Israeli narrative on the ongoing conflict, but a permanent dialogue with foreign and critical actors, beyond security issues. It is conceived as a response to the increasingly palpable criticism, in Israel and abroad, of the very notion of Hasbara, often rendered by the pejorative term “propaganda.”
In 2012, the Ministry of Foreign Affairs renamed its communications division from Hasbara to Public Diplomacy, with an evolving objective: no longer just to defend Israel’s legitimacy and policy abroad, but to present the Israeli narrative in all its diversity. In order to disseminate it more widely, a “digital diplomacy” department was created to promote Israeli narratives on all social platforms and in all languages (English, French, Spanish, Russian, Persian, and Arabic).
Another major difference is that, unlike Hasbara, which is conducted solely by state bodies, public diplomacy includes the participation and mobilization of public or para-public agencies, NGOs, opinion leaders and, more broadly, the whole of civil society, in Israel and beyond. Indeed, Israeli public diplomacy intends to rely on the Israeli and Jewish diasporas around the world to relay its messages.
The Various Concentric Circles of the System
At the heart of the system is the Army Spokesman’s Unit, which has been strengthened and supported by other state bodies. These include the Coordination of Government Activities in the Territories (COGAT), responsible for communicating Israeli policy in the occupied territories; the Government Press Office and the National Information Directorate, which report to the Prime Minister’s Office; and, of course, the Ministry of Foreign Affairs, which can rely on its network of embassies and consulates to amplify the Israeli narrative abroad. These institutions form the first circle of actors in public diplomacy.
Next come pro-Israeli NGOs, which are not necessarily Israeli. Among the most active are American NGOs. These are the ones with the biggest budgets, influence, and resources. Some have been around for a long time, such as the American Israel Public Affairs Committee (AIPAC), the main American Jewish pro-Israeli lobby, or the American Jewish Committee (AJC).
Others have been created more recently, precisely to respond in their own way to the failures of Israeli communication and to fight against the intense boycott campaign against Israel initiated by the pro-Palestinian Boycott, Divestment, Sanctions (BDS) movement. One of the most active is undoubtedly StandWithUs, a Los Angeles-based NGO founded in 2001 by three Jewish Americans. With an annual budget in excess of $20 million and offices in eighteen different countries, this NGO carries out actions in favor of Israel in the digital field and on the ground, particularly on campuses. There are many others,Footnote 7 and most of them are now organized into networks to amplify their action. All these NGOs are thus, in one way or another, integrated and taken into account in Israel’s global communications system.
Finally, a third circle has been devised for individuals, opinion leaders or influential Israelis, or people with strong ties to Israel, who live outside Israel and are encouraged to improve Israel’s image and disseminate pro-Israeli narratives to their entourage abroad. To structure these individual initiatives, the Israeli government launched the Masbirim (“explanations”Footnote 8) project in 2010, which aimed to provide ready-to-use narratives on various topics concerning Israel. The initiative was not as successful as had been hoped, but it does demonstrate the willingness shown some fifteen years ago to integrate diaspora personalities, who will play an important role from October 7, 2023, onwards, as discussed later in the chapter.
Expanding the Scope of Psychological Warfare
Psychological warfare is not a new communications concept in Israel. It can be defined as the means of influencing enemy perceptions through the dissemination of false information or disinformation (Galili, Reference Galili2019, pp. 75–91). Given its nature, psychological warfare was mainly used on a confidential basis by the army’s intelligence units for enemy state agencies, and not integrated within the army’s communications services to produce mainstream narratives.
The Failed Beginnings of the Psychological Approach
Things changed with the Second Intifada (2000–2005), when Israel realized that Palestinians were making abundant use of this communication tactic, producing narratives and images for foreign journalists that amplified or distorted reality.Footnote 9
In 2005, the Center for Cognitive Operations was created. Its primary objective is to develop a doctrine and concepts for cognitive campaigns within the army in order to influence the perception, political position, and feelings and sentiments of a target audience. It is within this center that influence methods and various tools for evaluating the impact of campaigns are developed. These concepts were immediately put into practice the following year, with the war against Lebanon (2006). The challenge was to document the war “from the inside,” so as to accompany Israeli victories with a communications campaign tailored to reach the Israeli and Lebanese public. For example, the town of Bint Jbeil in Lebanon was chosen as a target by the Israeli army, which succeeded in occupying it; six years earlier, it was from this town that Hezbollah leader Hassan Nasrallah had given a victorious speech after Israel’s withdrawal from southern Lebanon (2000). As part of a cognitive warfare operation, an Israeli colonel decided to deliver a speech there after the capture of the town, in front of his soldiers, some of whom filmed and photographed with their own equipment, immortalizing the installation of an Israeli flag replacing that of Hezbollah. But the doctrine of psychological warfare had not yet been fully integrated: The army spokesman decided not to broadcast the images, which were deemed too amateurish, leaving foreign journalists with no images of the event other than those eventually supplied by Hezbollah.
A first consequence was drawn from this: Tsahal decided to reinforce its control over information from war zones by preventing the deployment of journalists, particularly foreign journalists, in contact with soldiers. But this strategy quickly backfired, as Tsahal found out during its next conflict: Operation Cast Lead in Gaza against Hamas, in December 2008–January 2009. Israel banned foreign journalists from entering the Gaza Strip, leaving them dependent on sources and correspondents in Gaza, some of whom were working under the control and threat of Hamas. As a result, foreign journalists found themselves inundated with images of Israeli attacks, while Israel, on the defensive, spent most of its communication time minimizing civilian casualties.
The publication of the GoldstoneFootnote 10 report in September 2009 was yet another political setback for Israel, as the document drew up a long list of accusations against the Israeli authorities (and, to a lesser extent, Hamas), accusing them of having committed acts that could constitute war crimes or even crimes against humanity. This shows that, despite the Israeli government’s manifest determination, its communication strategy, even when incorporating new concepts, is still faltering.
The Use of Social Networks
Even so, the army continues its mission and adapts to new means of communication. The early 2000s saw the development of the internet, social networks, and smartphones, revolutionizing the means of communication. In 2009, the Israel Defense Forces (IDF) Spokesman’s Unit opened its own Facebook page, Twitter (rebranded as X since July 2023) account, and YouTube channel to communicate information directly to internet users. A dedicated “new media” unit was created (Amsellem, Reference Amsellem2020).
The first effects were quickly felt, beginning with the Gaza flotilla episode (May 2010).Footnote 11 In reaction to the incriminating images provided by the pro-Palestinian activists aboard the flotilla, widely relayed by the mainstream media, Tsahal decided to broadcast its own images just a few hours after the operation. These showed militants armed with iron bars attacking the soldiers as soon as they boarded the ship. The army had thus chosen to bypass the traditional media and give its own perspective on the facts by broadcasting its own content – a procedure that was not yet common at the time, and one that paid off: international newsrooms broadcast the army’s images, which presented the pro-Palestinian militants as violent aggressors and the soldiers as victims.
With these new capabilities, Israel broke new ground in the digital field of warfare. During Operation Pillar of Defense (November 2012), the IDF became the first army in the world to use Twitter to announce an imminent attack against an enemy actor on foreign soil. It was during this conflict that psychological warfare was massively extended to social networks: it became possible to follow, minute by minute, the progress of Israel’s operations, testifying to its determination to impose its own information content and combat its critics.
At the same time, and in order to wage psychological warfare, Israel was broadcasting on its networks videos of Palestinian rockets aimed at civilian targets in Israel; Hamas’ executions of Palestinians regarded as collaborators; and, in parallel, other content showing the army’s efforts to avoid civilian deaths. Each of these informational materials served a clear political purpose: to justify the Israeli attack, criminalize Hamas, and defend Israeli tactics on the ground. Noting the effects of its new communication, the IDF Spokesman’s Unit decided to create a unit dedicated to combat documentaries, to produce professional images directly from the battlefield.
Undoubtedly, Israel’s communication strategy had improved. The Israeli authorities were able to produce and disseminate their narratives from their own images, which were picked up massively on social networks and by foreign media. Psychological warfare worked this time, prompting Israel to strengthen this capability within its army: in 2018, a Department of Influence was created and placed under the control of the Operations Directorate, which is responsible for force deployment, reflecting a desire to integrate this aspect of Israel’s strategic communication into the very heart of its military operations.
Strong Digital Presence
The digital world is clearly a new theater of operations for Israeli communications. To maximize its influence, Tsahal does not hesitate to break with the reserve of other armies around the world, which are more circumspect in their use of social networks. Indeed, in some of its publications, the army has adopted the language, codes, and vocabulary of social networks, sometimes even using ironic and sarcastic humor on sensitive subjects.Footnote 12 This surprising tone is, in fact, a deliberate strategy by the “new media” unit to broaden its audience beyond its subscribers and detractors. This style of publication, which no comparable actor embraces so frankly, is often widely relayed on social networks precisely because it seems inappropriate, even if that means being criticized.
And it works. The Israeli army has accumulated tens of millions of followers across its various social networks (3.8 million on Facebook, nearly a million on YouTube), sometimes surpassing the US Army’s accounts (on Twitter, for example, with nearly 2.5 million followers); a considerable audience for a country of fewer than 9 million inhabitants, and one that also demonstrates how closely it is watched.
The advent of social networks radically changed the game for Israel. The traditional tools of Hasbara were not adapted to new needs and new ways of communicating. By the 2010s, Israel understood that its operations in the Palestinian territories were difficult to justify to an international audience. At the same time, groups and organizations hostile to Israel continued their work of delegitimization and virulent criticism, themselves producing biased information to broaden their audience via social networks. Where Hasbara used to campaign on university campuses and in international organizations, psychological warfare and public diplomacy are now deployed far more massively in the infosphere, with the aim of producing Israel’s own narrative, attacking its adversaries head-on, and disseminating its own version of the conflict and, more broadly, of Israeli society. All these doctrines have been tested in real-life conditions, during armed conflicts whose succession in recent years has required the Israeli authorities to be highly agile in identifying problems, evaluating successes, and continually adapting. This experience proved invaluable for the vast campaign the country launched on October 7.
Israel’s Communication from October 7 Onwards: An All-Out Strategy with Specific Objectives
Since the advent of social networks, the State of Israel has greatly evolved its strategic communication doctrine, anticipating events through the creation of a variety of narratives. To achieve this, the state has considerably increased its resources, both in terms of recruitment, with dedicated units or departments created within the army and the Ministry of Foreign Affairs, and in terms of distribution channels, since it is present on just about every major social media platform and in the main spoken languages. A whole ecosystem of public and para-public agencies also operates to distribute varied and targeted information to different audiences.
From October 7 onwards, a new element was added to Israel’s communication strategy, particularly abroad: The Jewish and Israeli diasporas started massively acting as relays for Israeli narratives, so shocked were they by the nature of the attacks. Associations and influential figures mobilized their networks and expertise to relay and even produce information content.Footnote 13 The approach itself is not new, and indeed a number of associations and organizations around the world had set themselves the task of relaying this information; but the scale of the effort is totally unprecedented and has never before reached the level of commitment seen since October 7 among leading figures in the diaspora.
In the wake of the Hamas attacks, the Israeli government’s communications system was able to deploy all its levers in pursuit of political objectives aimed at target audiences that differ from one region to another. The number of initiatives, publications, information materials, and actions, both in the physical world and in the cyber field, has been so great that an exhaustive presentation is impossible. To make sense of them, we have chosen to structure our approach around the “major informational actions” that bring together the bulk of these initiatives and serve two clear political objectives: raising public awareness and justifying the military response.
Raising Public Awareness
To raise international public awareness of the attacks of October 7, Israel carried out four informational actions to produce and disseminate its narratives. For the Hebrew state, this communication had several sub-objectives: internally, the aim was to unite and mobilize a country whose unity had been weakened, while responding to the high expectations of the population following the attacks; internationally, the challenge was to raise awareness of the unprecedented nature of the attacks in order to create a feeling of empathy and support. The ambitions were therefore immense, which explains the extensive resources deployed by the Israeli authorities and their relays around the world.
Intense Digital Campaign
On October 8, Israel launched an intense social media campaign aimed at Western public opinion. Through hundreds of publications, the Hebrew state hammered home several messages designed to raise awareness among international audiences. We have identified three main messages:
Hamas is equivalent to the Islamic State. Many of its publications are accompanied by the hashtag #hamasIsis or #hamasisIsis. The aim is to elicit the same rejection from Western opinion as the macabre acts of Islamic State fighters did a decade ago.
Hamas is the embodiment of Islamist terrorism, which will also target Western countries. Several visuals show armed men in tunnels under the Eiffel Tower or the Louvre Pyramid. The message is clear: Israel is the West’s outpost, and its fight against Hamas is part of a global struggle in which Islamist terrorism targets Western countries.
Hamas’ actions are a continuation of the antisemitic acts perpetrated by the Nazis. It can no longer be considered a political or even merely a terrorist organization. A number of publications have pointed out the anti-Jewish passages in the Hamas charter, while a telephone conversation was broadcast in which a terrorist on October 7 called his parents to boast that he had killed “10 Jews” – not Israelis – with his own hands. In this context, supporting the Hamas project, which aims not at a two-state solution but at “liberating Palestine from the river [Jordan] to the sea [Mediterranean],” is tantamount to supporting this antisemitic project.
The Means Deployed
To support these narratives, Israel chose, for the first time in such a rapid and massive way, to sponsor content (photos and videos) so that its messages would be pushed as ads before YouTube videos or directly into the newsfeeds of social network accounts (see Figure 7.1). Thus, all over the world, thousands (if not millions) of internet users encountered commercials warning of Hamas attacks. The commercials were also tailored to the target audience: for adults, violent, barely blurred images showed some of the Hamas attacks; before YouTube videos aimed at children under six (too young to read), a message addressed to parents explained that dozens of Israeli minors had been killed or kidnapped, and that Israel would do whatever was necessary to get them back.

Figure 7.1 Screenshots from social media showing content promoted by Israel through purchased advertising space (“ads”) displayed on users’ accounts, before their video (YouTube) or in their newsfeed (X/Twitter).
Figure 7.1 Long description
The screenshots reveal the following:
1. An advertisement displays the words ceasefire, cease fire, cease fire, and notes that many people use the term without fully understanding its implications. The ad claims that a ceasefire would only benefit Hamas, leading to more cycles of violence and terrorism. The instruction watch is provided, with a linked video below. The thumbnail shows military personnel stockpiling ammunition.
2. A screenshot of the advertisement on a user's X newsfeed, dated October 11, 2023, shows four photos of devastated buildings accompanied by a few lines of text. The text reads: They went from house to house. Burned people alive. Murdered entire families. Children. Babies. We will not be silent. May the memories of the victims of Kibbutz Beeri be a blessing. This advertisement has garnered 8.5 million views.
3. A YouTube ad titled, Hamas declared war against Israel. Against a dark background, the text reads: To protect our citizens against these barbaric terrorists.
4. A set of three ad video stills. The first features a young man and woman smiling, overlaid with the text, Hamas, a vicious terrorist organization, murdered over 940. The second shows an older man speaking on CNN, with the caption She was either dead on the left side of the screen and, below, Breaking News, CNN at site of deadly Hamas siege on Kibbutz, where 100 killed. The third displays multiple rainbows overlaid with the text, by the Hamas terrorists, followed by ISIS in parentheses.
A journalist (Smith Galer, S. [@sophiasgaler]) on X (previously Twitter) revealed that between October 7 and 19, 2023, eighty-eight different commercials were broadcast in some twenty countries, at a cost of over $7 million. France was by far the most targeted country, with spending estimated at €3.8 million, which would have produced almost 445 million impressions (the number of times an ad was displayed on a screen), followed by Germany (€1.9 million for 231 million impressions). For the period from October 7 to November 6, Google’s advertising transparency center lists around 200 spots paid for by the Israeli Ministry of Foreign Affairs. Interestingly, the same pro-Israeli content received relatively little sponsorship in the United States (around €50,000). Does this mean that Israel takes the support of the US government, and of a large part of US public opinion, for granted? If so, the reasoning proved partly mistaken, given the unprecedented mobilization for Gaza, particularly on American campuses. It would also mean that Israel considers Europe a politically uncertain partner, given the considerable financial effort directed at the major European powers.
This advertising campaign is just one part of Israel’s vast digital strategy on social networks. Indeed, since the 2010s, we have seen how the state has strengthened its resources to be more effective on social networks, recruiting people dedicated to “new media” within the army and the ministries.
Thus, the official accounts of the State of Israel, its Ministry of Foreign Affairs, and Tsahal (7 million followers on TikTok; 215,000 on YouTube; 1.5 million on X; 1 million on Instagram; 890,000 on Facebook) accumulate several million followers and publish dozens of messages, photos, and videos to disseminate Israeli narratives, even though this is strongly criticized: these publications regularly attract far more negative and pro-Palestinian comments, but that does not stop Israel from continuing to publish in order to get its messages across.
David Saranga, Director of the Ministry of Foreign Affairs’ Digital Office, estimates that in the first month of the conflict (October 8–November 8, 2023), Israel published over 2,500 messages in English on its networks, reaching over a billion people thanks to innovative content and sponsored publications.
The Israeli authorities also authorized the broadcast of interrogations of Palestinian terrorists arrested on October 7. In exchanges in Arabic with their Israeli interrogators, the prisoners declared that their organization, in committing the acts of October 7, had behaved worse than the Islamic State and had aimed to kill civilians and rape women.
The state and its various entities are not the only players. Israeli society has also mobilized to produce original, viral content. Such is the case of the famous Israeli TV show Eretz Nehederet (“A Wonderful Country”), a comedy program broadcast weekly for the past twenty years. Since October 7, its comedians have staged satirical episodes to denounce Hamas’ use of civilians and antisemitic acts by students on American campuses. These videos have been viewed millions of times on YouTube and relayed massively on social networks.
There is plenty more content, and many more narratives, criticizing Hamas and its supporters, or the silence of certain international organizations in the wake of the attacks. We cannot list them all, but let us end with the initiative of an art-oriented Israeli content creator, Hila Yerushalmi, who produced and broadcast a YouTube video denouncing the silence of international organizations over the rape of Israeli women. Her content was then relayed by thousands of people as part of a social media campaign entitled “Rape is not resistance.”
The Israeli Hostage Campaign
Another key aspect of Israel’s communications strategy was the vast campaign for the release of the hostages. Their exceptional numbers and profiles (very young children, women, and the elderly) created a shock in Israel and in the diasporas, which mobilized to demand their release.
Objective and Strategy
This mobilization was not specifically aimed at the Israeli government, whose war aims included securing the hostages’ release. It was aimed, above all, at the Western world, to arouse emotion and solidarity with the families in their fight for the hostages’ release. One initiative proved particularly successful: the one originally organized by hostage families who wanted to appeal to the world and publicize the victims. Via WhatsApp groups, these families distributed posters with the photos and main details (including age) of all the hostages, framed by large, easily identifiable red stripes and accompanied by a message in English: “Bring them home.” Pages and accounts bearing this name were created on social networks on October 10, and the hashtag of the same name was cited in publications.
The initiative aroused a great deal of interest in Israel, where hundreds of volunteers took part in poster-pasting campaigns featuring photos of the hostages. It is interesting to see how digital action produced real-world effects: the files with the hostages’ photos circulated in various Signal, Telegram, and WhatsApp groups, and were then printed and pasted on city walls. Very quickly, the group became structured, collected donations, and benefited from the support of professional figures such as David Meidan, a former Mossad officer who led the negotiations for the release of Israeli soldier Gilad Shalit (2006–2011).
International Relays
The campaign provoked a huge international response, particularly in Western countries, where a number of organizations and personalities, mostly from Jewish communities or the Israeli diaspora, organized poster campaigns in various public places or relayed the photos on their social networks. The National Council of Jewish Women in the United States was the first (October 19) to take up the initiative and organize poster-pasting in the United States; the Union des étudiants juifs did the same in Belgium, then in France.
Still within the Jewish community, and notably within associative networks, other initiatives accompanied the movement, with the creation of dedicated groups (“the October 7 collective”) and the organization of occasional actions beyond the poster campaigns (e.g., weekly gatherings). Various town councils in France relayed the campaign; in Nice, for example, the town council financed street billboards.
The element that certainly contributed most to the popularity of this campaign was the negative reaction of supporters of the Palestinian cause. Believing that these actions amounted to propaganda, or at least failed to shed light on the Palestinian victims of the conflict, dozens of people were filmed tearing down the posters, provoking widespread indignation and helping to amplify the campaign.
Aware of the communicative potential of this campaign for Israel’s image, the Israeli authorities also invested time and resources to finance larger-scale poster campaigns. We learned that the Israeli government had prepared a large-scale campaign to display photos of the hostages in the Netherlands a few days before the International Court of Justice (ICJ) hearings requested by South Africa, which accuses Israel of genocide in Gaza (January 2024). The official aim of the poster campaign was to “raise awareness of the need to free the hostages held captive by Hamas”; but the political maneuvering, given the timing and choice of location, is beyond doubt. However, a dozen poster companies in the Netherlands turned down the contract with the Israeli government, which nevertheless promised to organize several other awareness-raising campaigns in the country.
Field Visits
Israel’s other major communications initiative to raise public awareness of the October 7 attacks was the organization of visits to Israel, sometimes to the scene of the attack.
Traditional Channels of Communication
While the military response had already begun and the ground operation was being prepared, the State of Israel organized visits by foreign delegations who wished to express their solidarity or who came to see what had happened. These visits were a crucial moment for Israel, as they could raise awareness among a very broad public.
Throughout the month of October, a number of leading political figures (the US president, the German chancellor, the British prime minister, the Italian prime minister, the Dutch prime minister, and the French president) came to Israel to meet politicians, hostage families, and victims. Other politicians (MPs, senators, mayors, presidents of opposition parties) also visited Israel, and sometimes even the sites of the massacres. In moments of palpable emotion, several leaders made political statements that met the expectations of the Israeli authorities but also provoked criticism in their own countries. For example, during her visit to Israel on October 13, the President of the European Commission, Ursula von der Leyen, affirmed her unconditional support for Israel, which irritated some European countries concerned about the situation in Gaza. During his visit to Israel on October 24, President Emmanuel Macron proposed mobilizing the international coalition against the Islamic State to combat Hamas, a statement that upset some 100 French diplomats, who denounced in an internal memo a risk to France’s position in the Arab world.
Beyond the political world, Israel also wanted to show the violence of the attacks to delegations of journalists, so that they too could relay what happened that day. Several were invited to visit the scene of the attacks while the army and rescue services were still working in some areas to recover bodies from burned-out houses and blood-covered rooms. Live broadcasts were organized from the scene, sometimes behind trucks filled with unidentified bodies. By mobilizing foreign journalists, Israel enabled foreign newsrooms to produce and broadcast news content raising global awareness of the scale of the destruction.
Digital Influence
Israel also mobilized social network influencers to communicate about the attacks. The Israeli Ministry of Foreign Affairs organized visits to the attacked kibbutz by a number of Israeli influencers, whose interests had absolutely nothing to do with politics; the aim was to enable the dissemination of images on various social platforms such as Instagram, X, or TikTok and to reach a wider audience not necessarily interested in political news. The two most talked-about influencers at the time, Maja Kravarusic and Alina Rabinovich, who specialize in fashion and cooking, published a number of images usually reserved for journalists.
Israel, through its Ministry for the Diaspora and the Fight against Antisemitism (The Ministry for Diaspora Affairs and Combating Antisemitism, 2023), explains that it has mobilized Israel’s biggest influencers (TV presenters, models, and actors with tens of millions of subscribers worldwide) for visits to the field to tell, document, broadcast, and bear witness to the October 7 attacks.
This strategy was not limited to Israel. Several French influencers, including one of the best known, Magali Berdah, visited Israel at the invitation of an Israeli rescue NGO (Zaka). Although the visit was made possible with the agreement of the Israeli army, it was nonetheless the initiative of a non-state actor, confirming the ability of the Israeli system to rely on relays other than its state institutions, in line with its doctrine of public diplomacy.
Another visit was far more strategic for Israel in terms of digital influence: that of Elon Musk, head of X, who visited a kibbutz near Gaza on November 27 in the presence of the Israeli prime minister. His visit was significant for several reasons. In addition to his position at X, he is one of the most followed personalities on the network (170 million followers), making him a particularly influential intermediary. He also heads Starlink, a company that provides internet connections to territories that lack them. At the end of October, Israel had decided to cut off all internet connections to the Gaza Strip in order to hinder terrorist activity; Musk announced that he would deploy his satellite network to reestablish the connection to Gaza, which greatly concerned the Israeli Minister of Communications, Shlomo Karhi. After Musk’s visit, the same minister proudly announced that he had signed an agreement in principle under which Starlink would deploy its satellite network in Gaza only with Israel’s agreement.
Since then, Elon Musk has been a strong supporter of Israel, proudly wearing a “Bring them home” necklace in the shape of a military identification tag, given to him by the mother of a hostage who showed him a video of her son being kidnapped by Hamas.
The Video of the October 7 Attacks
The final key element in Israel’s communication strategy was the broadcast, from October 23 onwards, of a forty-minute film, edited by the Israeli army from video recordings made at the time of the attack. These videos were recovered from several sources, both Israeli (security cameras, witnesses’ and victims’ telephones) and Palestinian (terrorists’ GoPro).
From Communication Object to Political Object
What is most interesting about this video is not so much its content, most of which can be viewed on various Telegram channels, but the fact that it became a political communication tool in its own right. The Hebrew state chose to include extremely violent sequences (decapitation, execution at close range, violence against children, etc.) to show some of the atrocities perpetrated against its population; but rather than broadcasting the film widely, out of respect for the victims and in order to comply with platform distribution rules, the Israeli authorities decided to select its viewers. This decision lent political weight to the video, access to which became rare. Only a handful of people were invited to the viewing sessions, which were usually supervised by an Israeli representative. The effect of scarcity immediately made it a political phenomenon in the media, where a handful of journalists recounted part of what they had seen, using words that amplified the phenomenon (“unbearable images,” “unspeakable horrors,” etc.).
The video was first shown in Israel, to a limited number of foreign journalists who were told that it was forbidden to film the screen; the only images they were allowed to shoot were those showing the journalists’ faces during the broadcast, some of whom were covering their eyes or leaving the room. From a purely communications point of view, these images of distraught journalists were perhaps more effective in achieving the desired effect than broadcasting the gruesome scenes themselves.
The same scenes occurred in other circles: in front of American senators, in the French National Assembly or Senate. The network of Israeli embassies around the world was also mobilized to broadcast the video to selected audiences within their walls. Although the exact list is not known, we do know that screenings of the film were held in Geneva, Berlin, Brussels, and Madrid, as well as in Latin America (Santiago de Chile) and at the UN.
Invited by the French Embassy in Israel, as part of a screening reserved for French researchers, we also viewed this video. Beforehand, we were asked to sign an undertaking, later handed to embassy staff, not to film the scenes out of respect for the victims. More interestingly, the document recalled the political objective of the viewing: the video aims to show and prove the atrocities of the October 7 attacks, at a time when voices were being raised around the world to contest their veracity.
Another interesting element: the text we signed explained that Israel’s aim was precisely to target opinion leaders, influential people in political, media, and intellectual circles, so that they could bear witness to what they had seen. In a quick exchange with embassy staff, we learned that screenings had been carried out in Russia and China; we did not get any precise answers about the Arab world, where Holocaust denial is most widespread.
International Relays
As with the other campaigns, the Hebrew state was also able to mobilize the Israeli and Jewish diaspora around the world to relay the video. The most emblematic case is that of Israeli actress Gal Gadot. Famous worldwide for playing the iconic Wonder Woman, and now based in the United States where she is pursuing her career, Gal Gadot has tens of millions of followers across her social networks (over 100 million on Instagram alone). Deeply affected by the attacks of October 7, she obtained a copy of the video from the Tsahal spokesperson so that it could be shown at her home to a private circle of personalities from the American film industry.
A few days later, renowned American film director Steven Spielberg announced an initiative to collect video testimonies from survivors of the October 7 attacks. In France, the host and producer Arthur produced and broadcast on CNews a report on these events (“Supernova–massacre at the rave party”) at the end of December 2023. More than a simple assembly of previously published images, the report features previously unpublished content obtained directly from Israelis present at the scene or from videos of the assailants; all elements suggesting that the report could not have been made without authorizations or direct contacts with the Israeli authorities.
As we have seen, Israel succeeded in turning filmed scenes of the massacres into an object of political communication, relayed by its network of embassies abroad and by various players in the diaspora.
Justifying the Military Response
In addition to the various awareness campaigns organized by Israel, the authorities also organized a dedicated communication campaign to justify and defend the military response. Here, we have identified two main mechanisms, implemented mainly by the army, which has absolute control over communications in the military arena.
Here too, the objectives are twofold: to assure the Israeli population that the sacrifice they are making is necessary and worthwhile; and at the same time, to provide proof to the world that Israel is deploying proportionate force to achieve its war aims, while protecting Palestinian civilians as far as possible.
The Army Spokesman’s Press Briefing
Since the early days of the conflict, Israelis have tuned in daily to a live televised update on the security situation in the country. Initially scheduled to last just a few minutes, this press briefing expanded into a full-fledged press conference lasting between 20 and 30 minutes. Occasionally, journalists were invited to question the spokesman, who presented a whole range of visual materials (maps, satellite images, sound recordings, and videos) to explain current operations and demonstrate Israel’s ability to destroy the threat posed by Hamas.
Explaining the War
The content of this daily briefing has evolved with the conflict: initially confined to the domestic situation, it now covers all the fronts on which Israel is engaged (the West Bank, the North, Syria, Yemen, etc.). Its content is then picked up and commented on by all the media in the country, enabling the army’s narrative to spread throughout Israeli society and beyond. An English version is also available, enabling international newsrooms to retrieve information produced by the army directly. This is a classic Hasbara system: information is produced and distributed directly by the military authorities to journalists, who then pick it up and disseminate it.
On the domestic front, this daily briefing is a political success, thanks in particular to the stature and personality of the spokesperson, Daniel Hagari, a former commander of the IDF’s most prestigious unit, Shayetet 13, the elite commandos renowned for carrying out perilous and secret operations in enemy territory. According to a poll carried out five weeks after the start of the conflict, almost three-quarters of Israelis (73.7 percent) considered Daniel Hagari the most reliable source of information on the conflict, an interesting figure given that only 4 percent put Prime Minister Netanyahu in first place (Bagno, Reference Bagno2023). Despite the political crisis and Benjamin Netanyahu’s low popularity, Israelis still have confidence in the army, and therefore in the state, in this war.
Showing Success
The second strong expectation Israelis have of their leaders is their ability to secure the release of the hostages. To achieve this, the Israeli government’s strategy has been to intensify its attacks on Hamas in order to corner it and force it to negotiate a truce in exchange for the release of hostages. This position was not easy to maintain in the face of public opinion, which was very strongly committed to and mobilized for the release of the hostages. Various informational maneuvers by Hamas were aimed precisely at increasing the pressure on Israeli civil society via the hostages, so that it would demand a ceasefire from its government.Footnote 14
The release of around a hundred hostages under an agreement concluded at the end of October between Israel and Hamas via Qatar was presented as a victory for the Israeli government, justifying its military strategy. The state’s communication apparatus therefore used and staged these releases, producing videos and images in which the symbols of the state were omnipresent: Once freed, the hostages and their families were photographed in front of a large Israeli flag, surrounded by sympathetic soldiers. The images made the rounds on social networks and in the Israeli media.
Videos from the Field
To justify the war, Israel was not content to produce information from a command center or from hospitals inside the country; it also organized the production of informational material directly from the combat zones to document the army’s actions.
Live Warfare: Images from the Field
To ensure control of war news, the army completely closed the Gaza territory to the foreign press; only Al Jazeera, which has correspondents on the ground and is fully established in the territory, could continue to broadcast its own images. The aim is not to make ongoing operations opaque but, on the contrary, to broadcast information produced and/or controlled by the Israeli army.
The two main producers of content are the Israeli army and the foreign journalists authorized to film in areas designated by Tsahal. To be effective, the army deployed troops from the Spokesperson’s Unit to film and photograph war scenes. At the same time, several soldiers were equipped with body-mounted cameras to capture first-person footage of battle scenes; some of these videos were then retrieved and posted on the army’s various social networks. Finally, reports from occupied areas are produced entirely by the army, which takes every step to make this production as professional as possible (professional cameras rather than smartphones, drones for wide shots, microphones, etc.). The spokesperson, Daniel Hagari, himself appeared on camera to show the Hamas tunnels under Palestinian hospitals.
All these means and devices are aimed at documenting the war and putting across several messages to justify Israel’s action and denounce Hamas’ actions. The main Israeli narrative aims to criminalize Hamas, not just for its actions against Israel but for conduct that dangerously exposes Gaza’s civilian population. To this end, Israel provides images from the field showing weapon caches, tunnel entrances, and rocket launchers installed directly in schools, mosques, or hospitals. In its videos, Israel also shows how the equipment of international organizations benefits (intentionally or not) Hamas, pointing to United Nations Relief and Works Agency (UNRWA)Footnote 15 or Red Cross stamped equipment found in the terrorists’ caches in the tunnels.
The other aim of these communications operations from the field is psychological warfare. Israel wants to show its adversaries that it is winning the war, that defections and arrests of militants are multiplying, and ultimately that Hamas is completely losing control of the territory. Videos of dozens of Palestinians presented as Hamas members are posted on the networks, some accusing their leaders of betraying them.
Finally, there is another source of information from the field that Israel exploited to its advantage: footage produced by Hamas fighters themselves who, in a reverse psychological campaign, film their surprise attacks on Israeli tanks or soldiers. On social networks, images from this footage are extracted and broadcast by Israel to show that Hamas fighters deliberately dress as civilians to deceive soldiers and expose real civilians.
Protecting Palestinian Civilians
The other aim of these images from the field is to show that Israel is making considerable efforts to protect the civilian population of Gaza, and that its war is directed exclusively against Hamas. This communication work is not so important for the Israeli population, which is more concerned about the hostages and soldiers involved. On the other hand, it is of vital importance to international leaders, both in the West and among Israel’s Arab partners. Their position of support or moderate criticism of Israel would be more difficult to maintain in the eyes of public opinion if these states did not have guarantees and material proof that Israel is implementing a policy aimed at minimizing civilian casualties and protecting the Palestinian population as much as possible.
To this end, Israel produces a whole series of elements to justify itself: leaflets dropped from airplanes, audio clips of telephone calls advising inhabitants to leave, the organization of population movements via secure corridors, the resumption of humanitarian truck deliveries, and so on. In addition to protecting civilians, the army is also keen to show how it is helping the population by distributing food and water. All these actions are relayed on social networks by the army’s official accounts, then by Israel’s entire digital ecosystem.
Conclusion
During this new war, Israel demonstrated that it had learned from its communication mistakes in past conflicts. Indeed, the Israeli communication apparatus worked at full speed, deploying numerous strategies to reach very different target audiences.
This chapter focuses on Israeli actions, all of which have been deployed for Israel and the world, mainly in English-language versions. However, given the nature of the conflict, the historical relationship between Israel and the Arab world, and recent political agreements with a number of Muslim countries (the Abraham Accords), Israel has also deployed a specific communications strategy aimed at the Arab world. It goes beyond the major Arab media, most of which, according to the Israeli government, deliver a biased discourse; this is particularly true of the Qatari channel Al Jazeera, long known for its almost unconditional support of the Palestinian cause. Israel’s efforts have focused not so much on producing content dedicated to the Arab world – although this has been done too – as on adapting the way this content is broadcast and presented.
Since October 7, and perhaps for the first time in decades, Israel had an urgent need to get its narratives across to wage its war against Hamas. It had to mobilize its deeply divided society around a widely criticized government and, at the same time, provide sufficiently convincing elements for Tsahal to be able to carry out its response with the support (or silence) of its allies and partners.
Was Israel’s communication strategy effective? That depends on which objectives are considered. According to a study carried out in mid-December 2023 and aired by an Israeli channel, the overwhelming majority of social network posts and articles in the major international media concerning the war against Hamas were unfavorable to Israel. Of the nearly 2 million posts that received at least 500 likes, shares, and comments, 83 percent were against Israel, compared with 9 percent in its support. So, if Israel had set itself the goal of convincing the majority of internet users of the merits of its military action, the answer is clearly “no.”
But the effectiveness of a communications strategy must be assessed against the political objectives that have been defined. The question, then, is whether this unpopularity on the internet and the hundreds of pro-Gaza demonstrations around the world have prevented Israel from acting on the ground or led the government to lose the support of its population.
On the contrary, what has been observed since October 7 is that, thanks to its communications campaigns, Israel has succeeded in massively rallying its population, which, despite its losses, continues to strongly support the military action, and in creating a wave of solidarity on the part of all Western countries. This situation has had very concrete political consequences: military and economic support from the United States, which released a special $14 billion aid package and dispatched an imposing naval and air fleet to the region; political support from European leaders, despite criticism and calls for restraint in the face of the death toll in Gaza. As for the Arab countries, many of which had been expected to break off their ties with Israel, none have done so to date. Better still, Israel can count on certain influential members of these countries to relay its narratives.
In the months following October 7, we can therefore consider that the political objectives of Israel’s strategic communication were achieved. However, as the conflict dragged on, the effects of this communication strategy could have been challenged by further geopolitical developments, including digital influence campaigns launched by other major players in the conflict, such as Qatar. But since then, Israel’s political objectives appear to have evolved as well. The prime minister can rely on a decline in pro-Palestinian mobilization worldwide, while the return of Donald Trump to power in the United States ensures him strengthened support.
In February 2022, when Russia mounted its full-scale invasion of Ukraine, Western democracies came together in a remarkable show of unity in support of Ukraine and of democratic principles. They referred to Ukraine’s fight as a fight for democracy and a global rules-based order. They imposed sanctions on the aggressor, undertook massive shifts in energy independence and military spending, welcomed large numbers of Ukrainian refugees, and collaborated to provide the beleaguered country with financial assistance, intelligence and strategic support, and military aid – all critical to its survival in its David versus Goliath struggle with its larger, and more powerful, neighbor. As of this writing, two years into the fighting, the situation looks bleaker. In late 2023, Ukraine’s then top general publicly lamented the state of fighting as reaching a “stalemate” in which no major breakthrough was immediately likely. While the two armies stand bogged down in an attritional phase of war where lines move little, Ukraine’s military now runs low on munitions. In the absence of some course correction, observers increasingly point to a long-term Russian advantage. At the same time, discussions of continued aid on both sides of the Atlantic have faced intensified political challenges.
The fight on the battlefield will be crucial to Ukraine’s future – as well as to transatlantic security and democracy, and global order. But this fight itself hinges on the continued unity and collaboration across Western democratic partners that have provided critical aid and assistance to enable Ukraine’s remarkable show of resistance until this point. This coalition is premised on trust and shared values across Ukraine and its democratic partners. It depends upon electoral outcomes, domestic politics, and multinational organization processes across Europe, the United States, and beyond. As such, it has many seams of vulnerability – even without deliberate adversarial action. It has also been targeted persistently by Russian information and influence campaigns throughout the war and before, seeking to undermine unified support for Ukraine’s war effort across Western democracies. In so doing, Moscow seeks also to undermine future confidence in one of the West’s greatest strengths – its strength in numbers of allies and partners willing to collaborate around a shared vision.
In recent years, democracies have faced fundamental challenges in addressing the threat of adversarial disinformation aimed at undermining trust in and functioning of key democratic institutions and processes. These challenges are sufficiently daunting at the national level. They have led to significant, if nascent, innovations to mitigate impacts, foster trust and resilience, and simultaneously protect core democratic values. But the challenge does not stop at national borders. Nondemocratic competitors frequently use coordinated cyber-enabled disinformation and influence campaigns strategically at a regional or international level. They aim to undermine cooperation between democracies, break alliance cohesion, and otherwise use interference in the democratic politics of individual countries to foster broader geopolitical and strategic gains. No event in the past decade better illustrates this challenge than the war in Ukraine and Russia’s multipronged efforts to influence transatlantic cooperation in support of Ukraine. But the war has also fostered new forms of strategic awareness and adaptation.
This chapter examines the role of cyber-enabled information and influence in relation to transatlantic support for Ukraine as a case study of the challenge of addressing transnational coordinated strategic disinformation campaigns. Specifically, the chapter examines Russia’s uses of multipronged disinformation and influence campaigns in attempts to influence transatlantic support for Ukraine by democratic partners and allies, scrutinizing how the Kremlin has targeted within- and cross-national vulnerabilities as part of broader strategic goals. Investigating what has been done by democratic target countries, individually and collaboratively, to mitigate the worst outcomes and potential strategic impacts, this chapter in turn examines existing mechanisms of coordination and cooperation by which democracies have sought to address this strategically motivated threat, their current adequacy to the task, as well as ongoing learning and adaptation in this area.
The remainder of the chapter is divided into three sections. The first section, “Democracies and the Transnational Disinformation Challenge,” examines the nature of today’s strategic, transnational, cyber-enabled information and influence campaigns as a challenge to democracy writ large, and as a particularly complex challenge to international unity and collaboration across democratic partners and allies. The Ukraine war has made this threat more obvious than ever before, demonstrating how such campaigns can seek to undermine critical strategic collaboration between democratic countries. The second section, “Support for Ukraine and Russian Strategic Information Campaigns,” considers the role of a broad coalition of democratic partners in support for Ukraine’s quest for self-determination and democracy, examining how Russian efforts to thwart Ukraine’s defense have extended well beyond the battlefield through extensive orchestrated information and influence campaigns across countries and audiences. These campaigns have targeted not only specific actors or countries but also relationships, institutions, and mechanisms of cooperation across democracies that enable the collaborative support effort. The final section, “Conclusion: Defensive Progress and Challenges,” considers the challenge of democratic defenses and resilience to such strategic disinformation campaigns. While democracies have indeed made some progress, the challenge of addressing complex, transnationally orchestrated campaigns remains particularly daunting. The chapter concludes with a call for more robust and strategically aligned collaboration across democracies and in defense of collaborative institutions.
Democracies and the Transnational Disinformation Challenge
Since revelations of Russian efforts to influence the 2016 United States presidential election, democratic countries have been increasingly aware of the threat posed to their domestic political systems from adversarial cyber-enabled information and influence operations and campaigns. Such operations involve the deliberate use of information and narrative to target individuals, groups, or large segments of society, influence perceptions and decision-making processes, and ultimately gain desired strategic outcomes – often utilized during peacetime or gray zone confrontation without resorting to direct military conflict, but also used to complement above-threshold clashes, both within and outside of the theater of armed conflict (Lin & Kerr, Reference Lin and Kerr2019).
By no means entirely new, this form of competition bears deep commonalities with forms of information competition and deception common during earlier historical periods, with Russian use of these techniques particularly bearing similarity to the Soviet “active measures” of the Cold War period (Rid, Reference Rid2020). But the renewed use of such aggressive information confrontation was nonetheless novel in the post–Cold War environment, and the possibilities were also transformed by the fundamentally different twenty-first-century information environment, including the rise of the global internet, online media and social media ecosystems, and ubiquitous data, digital tools, and algorithms. The new forms of operations drew not only on earlier operational techniques but also on rapidly developing cyber domain capabilities, covert manipulations of online media and social media systems, and the multi-vector spread of false, misleading, or manipulative information and narratives. The variety of techniques demonstrated a degree of flexibility, low cost, and scalability that could potentially be much more widely utilized and emulated – particularly by nondemocratic states.
Adversarial disinformation poses particularly salient challenges for democracies. Efforts to sway and polarize public opinion, spread uncertainty and confusion about factual events, influence or undermine faith in the outcomes of elections, and otherwise weaken trust in key institutions and public figures all constitute challenges to core elements of democratic systems. Democratic states share deep commitments to freedoms of expression, association, and information. They support a free and open internet. And they base their governments’ legitimacy on the sanctity of free and fair elections. The online platforms, digital tools, and media outlets utilized in disinformation campaigns often fall under private ownership, cut across multiple jurisdictions, and have their own interests, rules of service, and technical challenges. Foreign hostile disinformation campaigns frequently interact with existing echo chambers, social cleavages, polarized media, fringe politics, divisive conspiracy theories, home-grown mis- and disinformation currents, and other domestic societal dynamics that are vulnerable to further exploitation and manipulation. The problem, therefore, has not been prone to an easy and immediate solution, despite serious efforts by democracies on both sides of the Atlantic.
While a variety of approaches have been attempted to address these challenges at governmental, whole of society, and even intergovernmental levels, the examination of the long-term strategic goals of adversarial campaigns has often been somewhat underdeveloped. Given the types of granular forensic data necessary to track and attribute disinformation operations, the focus has sometimes necessarily dwelt on specific adversarial behaviors or content forms such as the targeting of particular platforms, audiences, or events, the use of particular techniques of manipulation, or the spread of specific false, biased or misleading narratives, pictures, or other content forms. This is critical and challenging analytical work. But its focus is often necessarily at an operational or a tactical level rather than on strategic analysis. Insofar as specific campaign objectives are identified, these often have focused on local and chronologically proximate events, communities, and institutions. Adversarial uses of disinformation might be tied to efforts to generate an in-person protest event, exacerbate local animus between social groups, or undermine support for a political candidate. But the longer term follow-on effects of these coordinated actions and their impacts beyond the given country’s political system are often not the primary focus of inquiry.
The goals of uncovered operations are often broadly characterized in relation to efforts to undermine elections, sow distrust or confusion, and otherwise weaken democracy. While these broad high-level objectives are likely frequently correct, the focus on domestic politics sometimes has obfuscated attention to likely regional, transnational, or global strategic dimensions of adversarial efforts. By interpreting operations in terms of the local political context, issues, and threats to domestic democratic institutions, there has sometimes been inadequate attention to the wider orchestration of campaigns across multiple national political jurisdictions. In some cases, such campaigns might amount simply to efforts to achieve similar outcomes at scale across larger regions or sets of countries. But, in other cases, the attempt to influence dynamics between countries or regions is also critical – whether aimed at affecting foreign policies toward third parties, bi- or multilateral relations between states, the cohesion or policies of alliances or of regional or international organizations, or some other specific system-level interaction or outcome.
We can discern various examples of such higher level strategic logic at work in Russian uses of disinformation campaigns over more than a decade. While often characterized as efforts to create chaos and uncertainty, impart a sense of nihilism, and stand against something rather than for anything, there is also ample evidence that Russian information and influence campaigns have sought specific regional and international outcomes, including to secure influence across the historic region of the Russian Empire and Soviet Union, undermine European Union (EU) and transatlantic unity and North Atlantic Treaty Organization (NATO) alliance cohesion, diminish international accountability for chemical weapons use, limit uptake of Western COVID-19 vaccines, counter Western security roles in Africa and the Middle East, diminish trust in Western countries across the Global South, win broad support for positions in the United Nations (UN) General Assembly, and bolster global pro-Russia “conservative” movements and organizational networks.
Russia’s multidimensional use of cyber-enabled information and influence campaigns in relation to efforts to undermine Ukrainian independence stands out as a particularly longstanding and egregious example of such coordinated and long-term efforts across many national jurisdictions. Russia has persistently used a mix of information operations to target Ukraine in support of its own strategic goals since at least the 2013–2014 Euromaidan demonstrations and 2014 Crimea annexation and instigation of conflicts in the Donbas and other regions of Ukraine. This has included direct operations against specific Ukrainian targets and populations such as the 2014 hacking of the Central Election Commission of Ukraine and the attempt to spread doubt and undermine legitimacy of electoral results, the use of non-demarcated military personnel to reduce certainty and speed of attribution processes, and the use of propaganda to sow ethnic tensions and stoke conflict. Efforts have frequently focused on shifting narratives concerning specific events in or pertaining to the conflict with Ukraine, such as the rapid online spread of numerous alternative explanations following the July 2014 downing of the Malaysian Airlines flight MH17 over Ukraine.
Even these efforts focused directly on the Ukraine conflict have utilized a variety of operations and techniques to pursue interrelated goals. A number of digitally enabled techniques have been used, including, for example, the use of spear phishing to gain unauthorized access, hack-and-leak operations – as well as the release of forged documents – to influence public understanding of the conflict, distributed denial of service (DDoS) attacks to block access to information at key moments, and the release of wiper malware and cyberattacks destroying data, damaging the economy, and shutting down parts of the power grid. These technical operations have had informational and psychological elements, aiming to affect public opinion or decision-making dynamics – within Ukraine and Russia, but also sometimes beyond. Narratives have also explicitly been weaponized to exacerbate conflict and undermine support for Kyiv, including discussion of “being ‘brother nations’ with a shared history, religion, and culture; mistreatment of Russian-speaking communities in Ukraine and fear of supposed Ukrainian Naziism, portraying the elected government in Kyiv as illegitimate and violent ‘Banderites’ [or fascists]; and nostalgia for and possibility of reclaiming the former Soviet greatness” (Kerr, Reference Kerr2023). A variety of platforms and channels have been utilized to disseminate these messages, ranging from social media to television to official public statements and involving everything from fake online personas to supposedly independent expert speakers and official state sources and outlets.
The Kremlin’s use of information and influence campaigns in its efforts to subjugate Ukraine has also long involved international components. Operations targeting US and European elections, public opinion, or political decisions have often been examined mostly as threats to domestic political systems, potentially damaging to the targeted country’s national security and democracy. But collectively, many of these efforts have also aimed to support Moscow’s broader objectives in Ukraine, clearly supporting politicians, parties, or agendas viewed as most commensurate with these goals. Even the 2016 election interference, often seen as the origin point of Western concern about democratic vulnerability to disinformation, can also be understood as related closely to Moscow’s efforts to achieve desired outcomes in Ukraine: The former Republican campaign chief Paul Manafort had specifically served as an advisor to Ukraine’s pro-Russia former president Viktor Yanukovych, the Robert Mueller investigation centered on Russian election interference, and the first impeachment trial of former US president Donald Trump related to his handling of military aid to Ukraine.
Since Russia’s full-scale invasion of Ukraine in February 2022, the global and multifaceted scope of these efforts has only increased, with Russia utilizing coordinated, multifront, often transnational and cyber-enabled disinformation and influence campaigns to achieve its desired objectives in the conflict. This has become all the more important given the complexity and criticality of coordinated Western support for Kyiv’s war effort.
Support for Ukraine and Russian Strategic Disinformation Campaigns
In the opening days of Russia’s war against Ukraine, a broad coalition of democratic countries – including the United States, members of the EU and NATO, and other partners and allies – came together to condemn the invasion and to offer military and economic aid and other forms of assistance. This support has been critical to Ukraine’s ability to defend itself and withstand further Russian conquest during two years of full-scale war. It has involved major contributions by many countries of the transatlantic region and beyond. It has also involved efforts to reinforce and expand European and transatlantic institutions, embrace nonaligned, like-minded countries, and provide pathways toward greater security and integration into the community of democratic states for Ukraine and other non-EU and non-NATO member countries vulnerable to Russian coercion and aggression. All of this has been undertaken against a backdrop of concerns about potential escalation risks, economic and energy interdependencies, and differing levels of military stockpiles, defense industry readiness, and existing policies and cultural attitudes. These efforts have been remarkable, but they also have exposed additional vulnerabilities to Russian disinformation and influence campaigns – which the Kremlin has diligently sought to exploit.
A Transformative Coalition
In the first two years of the war, more than forty countries have contributed to the support of Ukraine – whether through the sending of arms and ammunition, support for military training, financial assistance, logistic support, acceptance and support of refugees, or other means. These efforts have involved significant financial and economic burdens, changes in policy, and overcoming a panoply of domestic, regional, intraorganizational, and transatlantic obstacles and tensions. Some countries have pledged large percentages of their national heavy weapons and ammunition stockpiles or committed to rapidly producing more as part of their support for Ukraine. Others have accepted large numbers of refugees, served as major logistics and supply hubs, or fundamentally changed aspects of their domestic economies or foreign policies. In addition to the United States and EU and NATO members, this has included G7 countries and others from across Europe and beyond – including contributions, for example, by Japan, South Korea, Australia, New Zealand, and Taiwan (Bomprezzi et al., Reference Bomprezzi, Dyussimbinov, Frank, Kharitonov and Trebeschn.d.).
As of January 15, 2024, partners have committed some $278 billion in aid since the beginning of the war, including $141.06 billion in financial assistance, $19.01 billion in humanitarian assistance, and $117.99 billion in military commitments (Antezza et al., Reference Antezza, Bushnell, Dyussimbinov, Frank, Frank and Frank2024). While the United States has committed $75.4 billion ($46.33 billion of it military aid), Europe (through the EU and bilateral commitments by individual countries) has collectively committed $183.43 billion ($102.5 billion of it financial and $67.68 billion military aid) (O’Hanlon, Stelzenmüller, & Wessel, Reference O’Hanlon, Stelzenmüller and Wessel2024). According to the Kiel Institute’s “Ukraine Support Tracker” statistics, thirty European countries as well as Canada have actually committed higher percentages of gross domestic product (GDP) than the United States (0.32 percent), though these come as proportions of smaller overall economies. A total of fifteen countries have committed more than 1 percent of their total GDP to the aid effort, led by Estonia (4.09 percent), Denmark (3.06 percent), Lithuania (2.04 percent), Norway (1.72 percent), and Latvia (1.67 percent) (Bomprezzi et al., Reference Bomprezzi, Dyussimbinov, Frank, Kharitonov and Trebeschn.d.). Eastern European countries formerly dominated by Russia have made particularly large proportionate commitments.
Of the top fifteen contributors by GDP percentage, seven were former communist states.Footnote 1 European countries have also provided significant support in the form of accepting and assisting Ukrainian refugees – led by Germany, Poland, and the Czech Republic, but with fourteen countries accepting refugee numbers greater than 1 percent of their prewar populations.Footnote 2 These countries have also spent substantial amounts on support for the refugees they have taken in, led by Poland spending 3.35 percent of its GDP on this support, followed by the Czech Republic and Bulgaria (each 2.08 percent), Slovakia (1.68 percent), Latvia (1.51 percent), Lithuania (0.98 percent), and Estonia (0.94 percent) (Antezza et al., Reference Antezza, Bushnell, Dyussimbinov, Frank, Frank and Frank2024).
These efforts have involved particularly significant undertakings or dramatic changes in policy by a number of countries critical to the support collaboration. For example, Germany – which has committed the second-most total ($24.2 billion) and military ($19.42 billion) aid of any individual country behind the United States, and a total worth 1.06 percent of its GDP – had to dramatically alter its approach to defense and military production and to overturn its trade and energy dependence on Russia (Antezza et al., Reference Antezza, Bushnell, Dyussimbinov, Frank, Frank and Frank2024). In his February 27, 2022, speech to the Bundestag, German Chancellor Olaf Scholz called the Russian attack on Ukraine a “Zeitenwende,” or “historic turning point,” and announced significant changes in German defense policy and military spending. While gaps remain between commitments and what has actually been delivered to date, this represents a generationally important change in German defense policy. France has also made substantial changes to its foreign policy as a result of the war effort. Some observers ridiculed French President Emmanuel Macron’s prewar shuttle diplomacy charm offensive and his open line to Putin during the first year of the war, seeing these efforts as potentially placating Russia or pressuring Ukraine into negotiation. But by December 2022, Macron had declared France’s support for Ukraine “all the way to victory,” and France had begun heavy arms deliveries to Ukraine. Subsequent changes suggest growing support for broader European defensive hardening, including a 40 percent increase in France’s defense budget announced in January 2023. Perhaps more significantly, there is also evidence of a changing French approach to European security – including a more positive stance toward further enlargement of the EU and NATO.Footnote 3
Former Soviet Bloc and nonaligned countries on Europe’s Eastern flank have also undertaken dramatic changes in policy and critical supportive efforts. In addition to their substantial aid contributions and refugee support relative to their GDPs and populations, Baltic and East European states have loudly pressured partners for greater supportive efforts. Poland has played a particularly essential role, serving as a hub for logistics – including the transfer of military and humanitarian aid – with the city of Rzeszow providing the closest airport to the Ukrainian border. It has also taken in one of the largest total refugee populations, with many more transiting through the country to further destinations. Ukraine’s closest European neighbors have also become conduits for the shipment of its agricultural products, while Turkey has played an intermittent role in pressuring Russia to allow their export via the Black Sea.
In addition to specific efforts to provide immediate aid and support for Ukraine, another significant element of the response by democratic partners has involved efforts to consolidate and reinforce structures of cooperation and security across Europe and the transatlantic region. In the first months of the war, Sweden and Finland – which had long maintained a neutral status in relation to NATO – declared their desire to join the alliance and were formally invited at the Madrid Summit in July 2022. Despite intra-alliance tensions that held up the actual accessions, most NATO members rapidly embraced the further integration of these two partner countries, seen as capable of quickly becoming important contributors to the alliance’s defense posture. Western democracies have also come together in an effort to provide pathways forward to secure democratic futures for Ukraine and other non-NATO and non-EU member countries vulnerable to Russian aggression. At NATO’s Vilnius summit in July 2023, the alliance affirmed language that “Ukraine’s future is in NATO” and agreed that, after the war, Ukraine would not be required to go through the formal process of a “membership action plan,” or MAP.Footnote 4 On December 14, 2023, the EU’s current members voted to begin membership accession negotiations with Ukraine and Moldova, also advancing Georgia to candidate status.Footnote 5
Vulnerabilities and Targeting of Ukraine’s Support
The outpouring of support for Ukraine in the face of Russian hostility has marked an impressive, coordinated response, but it has also exposed new seams of vulnerability to Russian information and influence campaigns. While some of these vulnerabilities relate to core democratic institutions, protections, and processes shared across democracies, others stem from the historic, cultural, or political specificities of particular states critical to the support efforts – or capable of playing spoiler in those efforts, such as through the EU or NATO. While the individual support of key states is vital in and of itself, some vulnerabilities are explicitly inter- or transnational, related to alliance or regional institutions, bi- or multilateral dynamics, coordination, trust, and cohesion. Russia has made extensive strategic use of cyber-enabled information and influence campaigns during the first two years of the war, including targeting these vulnerabilities in Ukraine’s support coalition. Coordinated campaigns across different national and regional audiences have targeted not only specific countries key to the support effort but also the very relationships, organizations, and mechanisms that have been most critical in enabling a collective unified response.
A review of several known instances of Russian disinformation and influence operations since the start of the full-scale war with Ukraine readily demonstrates this strategic sophistication, both in target selection and often also in the coordination of tactics and narratives across jurisdictions. We see this first, for example, in operations targeting countries that are leading providers of military, financial, or humanitarian support. Operations across these countries use similar but tailored tactics, aiming to amplify domestic challenges to support for Ukraine, encourage conciliatory positions toward Russia across both the right and left extremes of the political spectrum, and spread narratives undermining public support for Ukraine or emphasizing associated risks and harms to national well-being. These efforts began long before the full-scale invasion, cultivating networks of relationships with extreme political parties and movements, fringe media outlets, organizations, politicians, journalists, activists, influencers, and other public figures. They have also long involved cultivating digital ecosystems for the laundering and spread of interrelated but tailored disinformation across many platforms.
Since the invasion, not only has the intensity of these efforts increased but they have also focused much more explicitly on the goal of undermining support for Ukraine. Russian cyber-enabled information and influence campaigns seeking to affect the course of the war have targeted numerous audiences, including within Ukraine itself, the Russian public, the Global South, and the West. Building on the relationships, audiences, and techniques cultivated through earlier campaigns and operations, they have sought to “amplify divisive local issues and voices,” “tailor[ing] disinformation and narratives for specific audiences.” They have used a combination of official channels, “state-backed media outlets,” “Russia-linked actors,” and organizations, and “coordinated inauthentic activity on social media platforms.” They have leveraged techniques such as “hack-and-leak operations,” “falsified imagery,” fake copies of trusted websites, and automated systems for the dissemination, laundering, and amplification of fake stories across multiple media ecosystems (Atlantic Council Digital Forensic Research Lab (DFRLab), 2023; Huntley, Reference Huntley2023; Insikt Group, 2022; Kerr, Reference Kerr2023, p. 17). Through a mix of narratives and covert influence campaigns across numerous countries, they have aimed to discredit the legitimacy or capability of the Ukrainian government and military, spread doubt and fear concerning pro-Ukraine policies, encourage conciliatory approaches to Russia, and foment grievances and division between Ukraine’s supporters (Bronk, Collins, & Wallach, Reference Bronk, Collins and Wallach2022; Kerr, Reference Kerr2023). 
Narratives of particular prevalence have included those seeking to stoke hostility, fear, and resentment toward Ukrainian refugees; blame Russian sanctions and aid to Ukraine for local economic and social hardships; legitimize Russian actions and blame Ukraine, NATO, or the United States for the war; raise the specter of military escalation to involve supporting countries, nuclear, or other catastrophic threats; and sow distrust in mainstream parties and media of Western democracies as “biased or untruthful” (Aleksejeva, Reference Aleksejeva2023; Kerr, Reference Kerr2023).
Studies both prior to and during the full-scale war show Russia building vast networks for the distribution of targeted disinformation and cultivation and manipulation of supportive actors and publics across Europe and the United States, often appearing to particularly emphasize those countries most critical to the Ukraine support effort. For example, the EUvsDisinfo Database – a project of the European External Action Service (EEAS) East StratCom Taskforce – found that Germany was the European country most targeted by disinformation campaigns recorded in their data for the 2015–2021 period leading up to the war, with 700 cases collected. During this same period, their data showed France as the second-most targeted, with 300 cases. Comparative studies by the EU DisinfoLab since the full-scale invasion likewise point to Germany as the most-targeted EU member state during the conflict, while major leaks of Kremlin correspondence and investigative studies have demonstrated explicitly planned efforts to target both French and German public opinion and politics to undermine support for Ukraine.
Operations in both Germany and France have sought to instrumentalize fringe political parties and movements to spread narratives undercutting support for Ukraine across receptive audiences and promote policies serving Russia’s strategic goals. In Germany, for example, leaks and traced online campaigns show Moscow attempting to utilize pro-Russian media and social media as well as networks of collaborative relationships and clandestine operatives to mobilize both left- and right-wing activists, political parties, and sympathetic segments of the population around opposition to support for Ukraine – in some cases specifically seeking to bring together both ends of the political spectrum around common narratives and grievances. Some of these operations have long roots going back well before the 2022 invasion. Prior to the full-scale war, German-language Russian state-backed media had an established presence in the German media system, including outlets such as RT Deutsch and Sputnik, which focused their coverage on polarizing issues such as immigration and EU skepticism, producing content that was often picked up by – and frequently supportive of – extreme populist political parties and campaigns (Hadeed & Sus, Reference Hadeed and Sus2023; Spahn, Reference Spahn2020). By mid-decade, Russian disinformation often targeted Angela Merkel and her policies, including her stance on migration. In the infamous 2016 “Lisa F.” case, a hoax about a thirteen-year-old Russian–German girl having allegedly been kidnapped and assaulted by men of Arab origin was amplified by Russian media, leading to anti-migrant demonstrations and an attack on a refugee reception center by German neo-Nazis.
Since Germany’s blocking of RT shortly prior to the Russian invasion of Ukraine in February 2022, a number of pro-Russian German blogs and independent media sources on platforms such as Telegram have reached wide audiences with Kremlin narratives (DW, 2022; Kastner & Hewson, Reference Kastner and Hewson2023). Analysis by Reuters has identified twenty-seven Telegram channels that “reshare and boost pro-Kremlin messages to a combined audience of about 1.5 million subscribers,” including flattering images of Putin and warnings about the risk of nuclear escalation (Nikolskaya et al., Reference Nikolskaya, Saito, Tsvetkova and Zverev2023). German researchers have also shown that the Russian government is increasingly using official channels of communication such as Foreign Ministry and Embassy websites, social media, and press releases to spread disinformation (Organisation for Economic Co-operation and Development (OECD), 2022). In one prominent August 2022 example of the use of these mechanisms to spread inflammatory disinformation against support for Ukraine, an out-of-context video clip of German Foreign Minister Annalena Baerbock portrayed her as declaring her intention to continue support for Ukraine even against the will and interests of German voters. The clip was picked up first on pro-Russian Telegram accounts and then propagated further across mainstream social media platforms, spread by accounts connected with extremists and fringe political party members, leading to a trending hashtag on German-language Twitter (now X) calling for Baerbock’s resignation (Delcker, Reference Delcker2022).
The Kremlin has long sought to build closer relations with German fringe politicians and activists. The hard-right Alternative für Deutschland (AfD) party has been a particularly pronounced beneficiary of Moscow’s largess. Prior to and since the beginning of the full-scale war, several AfD members have been invited on trips to meet with Kremlin or Kremlin-aligned figures in Moscow, Belarus, Kyiv, and Russian-occupied regions of Ukraine. At the same time, Kremlin efforts have also aimed to stoke support for left-wing activism such as peace and anti-NATO movements. Investigations have traced Kremlin efforts to encourage alliance between the AfD and the far-left party, Die Linke, specifically appearing to target Die Linke parliamentarian Sahra Wagenknecht, who has led protests opposed to supplying arms to Ukraine.Footnote 6 Reuters investigations have also indicated that the Russian agency Rossotrudnichestvo is directly coordinating a “network of agents” – some under false German identities – involved in promoting Kremlin narratives in protests and civil society events within Germany (Nikolskaya et al., Reference Nikolskaya, Saito, Tsvetkova and Zverev2023). Since the beginning of the war, the AfD’s popularity has soared nationally, doubling between fall 2022 and fall 2023 to an average of 22 percent, with much higher numbers in some Eastern regions.Footnote 7 Research by Berlin’s Center for Monitoring, Analysis, and Strategy (CeMAS) also showed a significant increase in “approval of pro-Russian propaganda narratives” between spring and fall of 2022 (Kastner & Hewson, Reference Kastner and Hewson2023). The exact impact of Russian information and influence campaigns on such shifts in polling and public opinion is difficult to establish, as these efforts often target already vulnerable populations and seek to exacerbate preexisting grievances and dynamics. But these trends could prove concerning in relation to long-term support for Ukraine.
In France, Russian disinformation and influence operations have also sought to leverage fringe parties, organizations, and public opinion to influence policy. Campaigns targeting France since the start of the full-scale war, like those in Germany, have built on longstanding efforts. Russia has, for example, long cultivated a relationship with Marine Le Pen’s far-right party, the National Rally (formerly National Front): past loans have suggested financial support from Russia, Le Pen’s rhetoric has emphasized the negative impacts of sanctions on the French economy, and the party has voted against or abstained from support for Ukraine while advocating better relations with Russia. National Rally member and European parliamentarian Thierry Mariani also leads the Association for Franco-Russian Dialogue, an organization that promotes pro-Russian positions in Paris. Mariani has made frequent trips to Russia, served as an election observer in the Donbas in 2018, and led National Rally delegations to Crimea. He has pushed against Western support for Ukraine and sanctions on Russia, promoting negative views of the Ukrainian government and the United States. Jean-Luc Schaffhauser, a former member of the European Parliament for Le Pen’s party, has sought to “[launch] a foundation with Moscow’s backing that would advocate for a cease-fire in Ukraine, with the Kremlin maintaining its grip on the country’s eastern regions” (Belton, Reference Belton2023). Schaffhauser, who arranged the loans that supported Le Pen’s prior presidential bid, also plans to back a slate of far-right candidates for the 2024 EU Parliament elections who seek closer relations with Russia.Footnote 8
Russian efforts in France appear to have been tailored to emphasize particular narratives thought to be most resonant and to play off existing divisions and vulnerabilities in French society. In 2023, a French parliamentary inquiry issued a report noting Russia’s “long-term disinformation campaign” and efforts to “defend and promote Russian interests” and “polarize [France’s] democratic society” (Belton, Reference Belton2023). A December 2023 Washington Post investigative report examined how leaked Kremlin documents demonstrate explicit plans in this regard, led by First Deputy Chief of Staff of Putin’s presidential administration, Sergei Kiriyenko. The documents show an effort to utilize social media, politicians, activists, and other public figures to heighten domestic political tension, challenge NATO cohesion, and weaken support for Ukraine. Leaked 2022 Kremlin presentations give a window into the strategic thinking, expressing a belief that France would be “vulnerable to political turmoil.” They cited polling numbers showing that the French public – at 30 percent – had the second-highest positive view of RussiaFootnote 9 of any country in Western Europe (behind Italy), with “[40 percent] inclined not to believe reporting on Ukraine by France’s own mass media.” Kiriyenko’s team encouraged promoting narratives about the negative impacts of the sanctions on Russia leading to “social and economic crisis,” the inappropriateness of “paying for another country’s war,” suggestions of nefarious US use of the war “as an instrument to weaken Russia’s position in Europe,” the risk of European embroilment in a “World War III,” and the need for “dialogue with Russia” to develop a “common European security architecture” (Belton, Reference Belton2023). Other investigations suggest ongoing Russian efforts to exacerbate tensions around immigrants and ethno-religious difference in French society.Footnote 10
Use of digital platforms and tools has been critical in Russia’s efforts to spread disinformation and influence in France, as it has been in Germany and elsewhere. The leaked Kremlin documents showed direct instructions for the use of troll farms and false French personas in targeting French society (Belton, Reference Belton2023). In June 2023, French authorities announced the discovery of a broad Russian disinformation campaign, dubbed “Doppelgänger,” involving the production of fake duplicates of official government and media websites – including those of the French Foreign Ministry and the newspaper Le Monde – then using these impersonations to disseminate false content supportive of Russia and critical of Ukraine and its support. The French Foreign Ministry attributed the operation to Russia, pointing to “numerous elements revealing the involvement of Russian or Russian-speaking individuals and many Russian companies … [and] many state [or state-affiliated] entities.” The Russian companies Struktura and Social Design Agency were identified as “running [the] operation,” while an anonymous news agency, Recent Reliable News (RRN), provided a repository of propaganda content for its use (EU DisinfoLab, n.d.). The operation similarly targeted Germany, Italy, Ukraine, and the United Kingdom (Reynaud & Leloup, Reference Reynaud and Leloup2023).
In early February 2024, in another major revelation concerning Russia’s at-scale use of digital tools to spread disinformation, the French government revealed a vast network of Russian sites that were “structured and coordinated” to spread Kremlin propaganda across multiple countries supporting Ukraine, including France, Germany, Poland, and the United States.Footnote 11 The network, dubbed “Portal Kombat,” had been discovered by Viginum, the French government organization tasked with confronting foreign disinformation. The network included 193 sites set up to disseminate materials in multiple languages and jurisdictions. A sub-portion of the network, tracked by researchers, had spread over 150,000 articles during a three-month period from June to September 2023. The materials disseminated primarily focused on justifying Russia’s war against Ukraine and on spreading false or misleading information and narratives to do so (Le Temps, 2024; Willsher & O’Carroll, Reference Willsher and O’Carroll2024).
Russian campaigns to undermine support for Ukraine have also targeted Poland, one of Ukraine’s most vital supporters and closest neighbors. As with other cases, the prehistory of the recent disinformation and influence efforts stretches back well before the full-scale invasion. While a significant wariness of Russia and of potential Kremlin interference or aggression has long been a hallmark of Polish domestic politics – including on the far-rightFootnote 12 – evidence suggests that Russia has attempted a similar cultivation of fringe political party connections and exploitation of local grievances as in the other countries it has targeted. Leaked emails have shown Russian cultivation of relationships with some fringe Polish politicians at least as far back as 2015, utilizing connections with the Zmiana (“Change”) party, for example, to prompt demonstrations of support for Russia’s annexation of Crimea (Agence France-Presse in Warsaw, 2016; Foy, Reference Foy2015; Glos Koszalinski, 2023; Morozova & Szczygieł, Reference Morozova and Szczygieł2023; Pacula, Reference Pacula2015). In 2019, only three months after its formation, Konfederacja (“Confederation”) – “an openly pro-Russia party” – won 4.6 percent of the vote for Polish representation in the European Parliament. While this fell short of the 5 percent threshold to win seats, it finished as the fourth strongest party and its fortunes have continued to grow.Footnote 13
Pre-2022 Russian information and influence campaigns in Poland leveraged both overt state-backed sources and covert networks – often involving social media.Footnote 14 They also used illicit cyber campaigns involving phishing and hack-and-leak operations.Footnote 15 Particularly prominent themes, which have had continuing relevance during the war, included Polish loss of sovereignty to the EU, US, or other foreign actors; Polish “‘Russophobia’ and anti-Russian actions”; “Polish aggressiveness and imperialist intentions,” including desires to turn Belarus and Ukraine into “vassals”; “economic difficulties” resulting from this subservience, hatred, and aggression; and amplifying historical grievances and false or misleading “history-related claims,” including about Polish responsibility for World War II, and inadequate Polish gratitude for the Soviet liberation of Poland (Nemečkayová, Reference Nemečkayová2023). Other prominent narratives of Russian pre-2022 disinformation in Poland included those about the COVID-19 pandemicFootnote 16 and migration and refugees.Footnote 17
Since February 2022, Russian information and influence operations in Poland have built on these earlier efforts. In a rapid shift following the start of the full-scale war, many of the social media accounts and group pages that had been most active in promoting COVID-19 disinformation (or other themes such as conspiracies around 5G) switched to align against Ukraine and the war support effort. Hundreds of new anonymous accounts also appeared suddenly and began to comment on content related to Russia and Ukraine.Footnote 18 Disinformation about the war has focused with particular intensity on undermining Polish relations with Ukraine and attitudes toward Ukrainian immigrants, but also on justifying Russian actions, questioning the economic and social consequences of support for Ukraine, promoting pacifism and anti-war sentiment, and amplifying tensions between East and West European allies. Narratives about Ukrainian refugees have suggested that the refugees are usurping privileges of Polish citizens, such as access to jobs, housing, hospitals, or schools, stirring resentment. They have also sought to induce fear and anger by falsely suggesting that the refugees are mostly young men rather than women and children, and that they are dangerous and engaged in criminal activities.Footnote 19 Some propaganda narratives play up historical grievances and ethnic tensions (Insikt Group, 2022, p. 5) – such as recalling unresolved animosities around the World War II Volhynian massacre, or even suggesting Polish imperialist ambitions in Ukraine.Footnote 20 Other content seeks to amplify broader anti-Ukrainian sentiments – whether by suggesting that Ukrainians are all “Banderites,” Nazis, fascists, nationalists, or extremists, seeking to discredit the Ukrainian government, or blaming Ukraine and NATO for inciting the conflict (Tymińska, Korpal, & Sek, Reference Nabozhniak, Tsekhanovska, Castagna, Khutkyy and Melenchuk2023; Zadroga, Reference Zadroga2023).
Research on Polish social media since the start of the war shows some of this anti-Ukrainian rhetoric taking off – including the popularization of the hashtag #StopUkrainizacjiPolski (“Stop the Ukrainization of Poland”), driven in part by the at-scale deployment of inauthentic, automated, Russian disinformation-spreading accounts.Footnote 21
Recent shifts in public opinion polling, political rhetoric, and protest dynamics within Poland suggest that – though support for Ukraine is still relatively strong – the audience for some of these Kremlin messages may be growing. Public opinion studies conducted in the fall of 2022 and early 2023 already found Poles – particularly in poorer regions of the country – growing more critical of the economic effects of support for Ukrainian refugees, with approximately a third of the population believing that “the war in Ukraine was the result of [the same] global liberal conspiracy […] that caused the COVID-19 pandemic,” and that “Ukrainians residing in Poland were not refugees but economic migrants.” Approximately two-thirds of respondents thought that Poland “‘cannot afford’ the presence of war exiles” (Mazzini, Reference Mazzini2024). In 2022–2023, polling showed the right-wing party Confederation’s popularity rising rapidly, polling third with 10 percent support by April 2023, and reaching 16.9 percent by July, prompting much speculation that they would prove “kingmakers” in Poland’s October 15, 2023, parliamentary elections, potentially helping then-ruling Law and Justice party (PiS) form a coalition government. While this did not occur,Footnote 22 the campaign saw the Confederation and PiS candidates competing to be the loudest “defenders of Polish sovereignty,” with slogans of “no more welfare for Ukrainians” and that “the needs of Polish citizens need to come first.” The year 2023 also saw the emergence of the Polski Ruch Antywojenny (Polish Anti-War Movement) or PRA, which led a march in Warsaw in May 2023 and popularized the hashtag/slogan “#niemojawojna” (Not My War).Footnote 23 Poland has also seen several waves of mass protests by farmers, angered by the competition created by the import of inexpensive Ukrainian grain. 
The apparent role of the Confederation party in orchestrating these protests, as well as some pro-Russia displays, suggests potential growing alignment between the galvanized farming communities and Russian narratives.Footnote 24
As important as the targeting of individual countries key to Ukraine’s support, Russia’s campaigns also appear to target key relationships between countries, the cohesion of and trust in multinational organizations, and the mechanisms of cooperation playing critical roles in the support processes. These have included efforts, for example, to sow discord in bi- and multilateral relations between partners within Europe, to diminish trust and cooperation in the transatlantic relationship between Europe and the United States, to weaken the cohesion of and amplify distrust in the EU and NATO, and to reduce the confidence of supporters in Ukrainian leadership and capability. Such campaigns often make use of multiple coordinated operations across different countries, stressing parallel themes of discord, leveraging transnational networks, and laundering disinformation through fringe media and social media ecosystems across different languages.
We see this, for example, in the Kremlin’s efforts to amplify rifts between Eastern-Central and Western Europe. Already by July 2022, Recorded Future had tracked RT stories “repeatedly stressing extreme political divisions between Poland and Germany or insoluble disagreements between Poland and the Baltics versus Germany, France, and Turkey, suggesting radically different stakes or positions in relation to the war.”Footnote 25 Also noteworthy are attempts to undermine European trust in the United States and its motives in rallying transatlantic support for Ukraine. An incident in September 2022 involving efforts to publicize a fake RAND Corporation report shows the lengths to which Russian operations have gone to launder disinformation and help it gain traction in relevant echo chambers across countries and language barriers. The inauthentic report, supposedly prepared at the behest of the US government, laid out a plan for using the pretext of a US-provoked war between Russia and Ukraine as a mechanism to weaken Germany and the EU while strengthening the United States. The report and stories about it were publicized by fringe outlets in Germany and then Sweden, discussed in connected YouTube videos, and then written about by RT and promoted by the social media of the Russian Embassy in Sweden.Footnote 26
Equally complex Russian disinformation and influence campaigns have sought to disrupt NATO and EU processes and cohesion. Sweden’s and Finland’s 2022 decisions to apply for NATO membership and the subsequent accession processes became targets of opportunity for such campaigns. Russian disinformation narratives tracked in Finland have, for example, “argue[d] that Finland has been pressured by the United States to apply to the alliance and that the majority of Finns oppose the NATO membership.”Footnote 27 The campaigns to slow or undermine Sweden’s NATO bid appear to have been particularly complex, spanning several languages and countries, and plausibly involving covert encouragement of Koran burnings and other protest actions in Sweden as well as online campaigns. In July 2023, the Swedish government announced that Sweden had been targeted by “Russia-backed actors […] amplifying incorrect statements such as that the Swedish state is behind the desecration of holy scriptures.”Footnote 28 Sweden’s Psychological Defence Agency drew attention to the fact that RT and Sputnik had published numerous articles in Arabic about the Koran burnings, and that their agency had tracked millions of social media posts in Arabic and other languages, apparently part of a campaign to portray Sweden as an “Islamophobic” country that supports Koran burning, to create tensions with domestic minorities and backlash in Muslim-majority countries (protesters stormed and vandalized the Swedish Embassy in Baghdad). Most significantly, these campaigns amplified a pretext for continued Turkish (and to a lesser extent Hungarian) resistance to Sweden’s NATO accession bid.Footnote 29
Conclusion: Defensive Progress and Challenges
In February 2024, when the French government issued an announcement about its discovery of a vast network of sites disseminating disinformation about the war in Ukraine in multiple languages across Europe, it did not do so alone. Instead, the “Portal Kombat” discovery was revealed in a joint press conference of the French, Polish, and German foreign ministers. In this collaborative “Weimar Triangle” format, the three countries – each key actors in support for Ukraine – also announced the launch of a joint alert mechanism to enable intelligence sharing and coordination around “unacceptable interference,” such as Russian disinformation campaigns. The countries also agreed to each do more to “pressure online platforms to counter these influence operations.” “We are in a period of vulnerability with the European elections,” noted the French Foreign Minister Stephane Sejourne, drawing attention to the June vote for the European Parliament (Irish, Reference Irish2024; Le Temps, 2024). While this recent move is certainly not the first effort by Western democracies to find common cause in fighting transnational flows of foreign disinformation, much more such coordination will be necessary to succeed in mitigating the threat.
Western threat perceptions around cyber-enabled disinformation have developed gradually since at least 2016, prompting various forms of experimentation to address the problem. Following initial surprise, states were relatively quick to recognize the strategic nature of the threat, with its potential to undermine democratic elections and decision processes. But much of the initial focus fell on developing the necessary capabilities and collaborative relationships to enable tracking and, where possible, disrupting particular operations on single or multiple platforms. This included developing research centers focused on identification and tracking, building mechanisms for multistakeholder and whole-of-society engagement, ironing out relevant areas of law, many of which differ across jurisdictions, and, in some cases, creating government entities tasked with oversight on the issue.
While a full review of these developments is beyond the scope of this chapter, we see examples of the significant headway that has been made in addressing the disinformation challenge in the various governmental and nongovernmental institutions, private sector reports, open-source intelligence trackers, and other data sources cited. We are no longer operating in the dark with regard to many of the ongoing operations and campaigns. The number of trustworthy organizations engaged in researching and publicizing information concerning cyber-enabled disinformation operations has skyrocketed in Europe and the United States over the past eight years. This includes programs dedicated to tracking, exposing, and counteracting foreign disinformation campaigns, from think tank and university centers to investigative media outlets and civil society organizations, and from private sector entities to government bodies. Several countries have dedicated government bodies focused on addressing the issue at home or abroad.
But we also see here some of the gaps that have developed in Western efforts to address the problem. At first by necessity, much of this work has happened at a highly granular and forensic level – concerned with tracking particular identified operations. Because the early focus fell on known operations targeting either domestic elections or destabilizing areas of domestic politics, the lion’s share of attention has examined these national-level dynamics as the end-goal targets of the cyber-enabled influence campaigns tracked. This has been challenging enough, with national government responses often hamstrung to some degree by the phenomenon’s intersection with thorny areas of domestic politics and the entanglement of foreign disinformation campaigns with domestic actors and their freedoms of expression and organizing. In other words, identifying and attempting to mitigate the national and subnational operational and tactical dimensions of disinformation campaigns has posed sufficient challenges. As a result, the strategic orchestration of campaigns across multiple national jurisdictions – including campaigns intended to shape long-term interactive outcomes beyond the particular countries targeted – was, at least initially, an underappreciated dimension of the threat.
Democratic countries have thus made substantial progress in building innovative institutions and mechanisms to address the problem of foreign disinformation – including innovative defensive measures by governments, the private sector, and civil society. But these efforts still face many significant difficulties even at the national level. At the same time, evident transnational dimensions of disinformation campaigns around the COVID-19 pandemic and the war in Ukraine have drawn growing attention to the criticality of coordination on this issue across democracies in order to protect common interests, relationships, and capacity to successfully collaborate to achieve shared strategic goals.
It is not too late to better protect these objectives vis-à-vis Ukraine, but time is of the essence. Finding adequate collaborative solutions to protect Ukraine’s support coalition from adversarial disinformation will also prove broadly applicable and critical in the next phases of global strategic competition, which will shape the future success of the international rules-based order and democracy.
In contemporary societies, the pursuit of democratic ideals is a common theme, yet the practical implementation of democracy can vary significantly, especially in non-Western cultures. This chapter delves into the Russian context, where the struggle for democracy persists in a pervasive totalitarian environment. Although Russia’s attempts at democracy, initiated in the early 1990s following the dissolution of the Soviet Union, have largely failed (Graham, Reference Graham2023; McFaul, Reference McFaul2021), many Russian citizens, particularly the younger generations, maintain hope for shedding the yoke of totalitarian rule. They engage in political activism, often through innovative means. This chapter examines how Russian citizens leverage cyber technologies, participatory media, and humor in their pursuit of democratic change. Focusing on their use of cyber humor on social media, particularly in relation to the 2022 Russian invasion of Ukraine, we aim to understand how citizens engage in democratic practices despite severe constraints on freedom of speech and dynamic information sharing. Our investigation is guided by the following research questions:
How do Russian citizens employ humor as a vehicle for democratic change?
What political functions does humor serve in the context of the ongoing Russo-Ukrainian war?
How does the nature of humor differ between individuals advocating for democratic change and those supporting the autocratic regime?
We postulate that prodemocratic actors employ humor in three key ways: (1) to challenge the regime by countering its propaganda and disinformation; (2) to create a space for broad political engagement and discussion; and (3) to nurture a digital community that upholds democratic values and mitigates the sense of isolation and loneliness among its members.
The Value and Functions of Humor and Satire
As Nietzsche observed, “Man alone suffers so excruciatingly in the world that he was compelled to invent laughter” (as quoted in Gruner, Reference Gruner1978, p. 2). Humor has evolved in humans as a means of coping with stress and adversity; by laughing at threatening events and stressful situations, we are able to displace feelings of anxiety, depression, and anger (Martin, Reference Martin2007). Many humor theorists have pointed generally to the relief provided through humor, be it from physiological, psychological, or social constraints (Derks, Reference Derks, Chapman and Foot1996; Holland, Reference Holland1982; Koller, Reference Koller1988; Martin, Reference Martin2007; Mulkay, Reference Mulkay1988). Release theories of humor suggest it relieves tension and suffering by providing physiological restoration as well as psychological and emotional relief (Chafe, Reference Chafe2007; Davis, Reference Davis1993; Hurley, Dennett, & Adams, Reference Hurley, Dennett and Adams2011; Keith-Spiegel, Reference Keith-Spiegel, Goldstein and McGhee1972; Meyer, Reference Meyer2000).
Comic relief can keep us from acting or thinking seriously when serious responses would be counterproductive (Chafe, Reference Chafe2007), possibly risky (Martin, Reference Martin2007), or socially restricted (Keith-Spiegel, Reference Keith-Spiegel, Goldstein and McGhee1972). Cohen (Reference Cohen1999) considers this response an expression of humanity to “keep us from dying of fear or going mad” (p. 41), and Highet (Reference Highet1962) likens it to a medicinal effect, both in healing the wounds of social injustice and providing inoculation against further painful truths. As Koller (Reference Koller1988) noted, “Humor is viewed as a powerful mechanism to help individuals endure, to cope with, and to move beyond – in a word, survive – those difficulties, problems, and issues confronting them because humanity has contrived social systems and procedures that are inadequate and imperfect in meeting human needs” (p. 1).
Humorous responses also can create a safe space for social groups to confront difficult circumstances and build solidarity (Fox, Reference Fox and Amarasingam2011; Boxman-Shabtai & Shifman, Reference Boxman-Shabtai and Shifman2014; Martin, Reference Martin2007), particularly in times of stress when competing with other groups (Giles et al., Reference Giles, Bourhis, Gadfield, Davies and Davies1996). In this way, humor can serve a dual purpose of uniting and building solidarity among those who share it while targeting others outside the group (Boxman-Shabtai & Shifman, Reference Boxman-Shabtai and Shifman2014). Humor targeting others, particularly out-group members, is often used to establish the in-group’s superiority, a response that not only unites but also mobilizes and creates a stronger sense of identity (La Fave, Haddad, & Maesen, Reference La Fave, Haddad, Maesen, Chapman and Foot1996; Martin, Reference Martin2007; Zillmann & Cantor, Reference Zillmann, Cantor, Chapman and Foot1996).
Jokes targeting others in positions of power will likely employ satire rather than superiority. By criticizing, undermining, challenging, and devaluing people or institutions in power, satire aims to fix social and political problems by using humor to hold those in power accountable (Martin, Reference Martin2007; Fox & Steinberg, Reference Fox and Steinberg2020; Highet, Reference Highet1962). Satire helps to resolve structural tensions (Mulkay, Reference Mulkay1988) and to restore social norms and values and a sense of morality (Davis, Reference Davis1993; Koller, Reference Koller1988).
While humor has been studied in the context of civil uprisings and repressive regimes, such as the Arab Spring and the uprisings in Syria (Anagondahalli & Khamis, Reference Anagondahalli and Khamis2014; Elsayed, Reference Elsayed2016; Hatab, Reference Hatab2016), and historically throughout the former Soviet empire (Davies, Reference Davies2007, Reference Davies2010), neither its impact on information behavior nor its role as a distinguishing feature of Russian propaganda and anti-war resistance narratives has received the same level of examination; such studies have not been conducted in the context of the Russian-speaking public sphere. Considering the hyper-connected nature of digital environments and the attempts of the Russian government to exert more control over both its domestic and international cyberspace (Wilde & Sherman, Reference Wilde and Sherman2022), it becomes crucial to capture and understand the government’s patterns of domestic communication and the resistance to them, both to limit the spread of disinformation and, more broadly, to enable cyber peace. Identifying distinguishing features of humor in Russian online spaces will create better knowledge of the role of humor in countering propaganda and may help identify opportunities to subvert undesirable messages and amplify desirable ones.
Our core underlying hypotheses are that (1) pro- and anti-war narratives will have marked differences in their use of humor-related techniques and (2) those differences can be leveraged to detect propaganda and counternarratives and increase the effectiveness of communication responding to disinformation.
Political Speech in Russia
The current political situation in Russia is characterized by a high concentration of power in the hands of President Vladimir Putin and the increasing crackdown on any form of dissent, which began soon after the start of the Russo-Ukrainian war in 2014 (Freedom House, n.d.). Attempts to suppress and manipulate public opinion have been observed in Russia since the 2010s, when over fifty new laws were passed to expand censorship and surveillance, restrict freedom of speech, and prevent citizens from demanding free and transparent elections (International Federation for Human Rights, 2018). The laws enabled the government, among other things, to block websites without a court decision, monitor users of messaging services, prosecute citizens for their opinions, and suppress activities of any organization or individual under the “foreign agent” label (FIDH, n.d.).
In 2012, a new law restricted the right of Russian citizens to gather in public places and introduced stricter punishments for organizing such gatherings, particularly when they led to disturbances of public order or interference with traffic. Loose interpretations of what constitutes a public event gave law enforcement greater freedom to detain, arrest, and fine Russian citizens, creating more fear and reluctance to participate in any public expression of opinion. The law aimed to punish not only organizers and participants but also those who disseminated information about the event (OVD-Info, n.d.).
In 2016, another round of legislative changes further restricted political and religious freedom by expanding the interpretations of and punishments for extremism and terrorism, requiring telecom companies to store data on Russian servers, and providing the state with access to that data. The arrest and detention of leading opposition figure Alexei Navalny in January 2021 sparked some of the largest protests in Russia, with thousands of people detained and arrested daily (Human Rights Watch, 2021). The police also detained and threatened journalists who reported on the events, and pressured students and activists to withdraw from participation in protests and further actions (OVD-Info, 2021).
The political situation worsened in February 2022, when Russia initiated a full-scale invasion of Ukraine. Calling it a special military operation (SVO) to downplay the scale and intent, Russia occupied the eastern Ukrainian territories, launched air and missile attacks on various Ukrainian cities, including Kyiv, Odesa, Kherson, and Zaporizhzhia, and caused thousands of casualties, massive displacement, and a humanitarian crisis. Days after Russia invaded Ukraine, the Russian government made it a crime to oppose the war in public and introduced large fines and up to fifteen years in prison for spreading “false” information about Russia, its actions abroad, and its army (Tsvetkova, Reference Tsvetkova2023).
“Foreign agent” legislation is used to suppress dissent – many independent media organizations and intellectuals have been designated as foreign agents, which cuts their funding, undermines their reputation, and effectively removes them from the public sphere. In May 2023, the list contained almost 600 entries and included many prominent nongovernmental organizations (NGOs) as well as journalists, politicians, writers, and artists (Gogov.ru, 2022). In addition, a new national facial recognition database was expanded, increasing Russia’s digital surveillance power over its citizens (Salaru, Reference Salaru2022). The government continues to repress dissent, jailing Russian citizens for such offenses as holding a poster, sharing a news article on social media, or even organizing a theater performance.
In 2022 and 2023, the Russian government continued to undercut the prominent voices of opposition. In addition to arresting Navalny, they arrested Ilya Yashin and Vladimir Kara-Murza, sentencing them to eight and twenty-five years in prison, respectively (de Vogel, Reference de Vogel2023). In March 2022, the homes of the former leaders of “Memorial,” an international human rights organization founded in Russia, were raided and cases were opened against them for discrediting and rehabilitating Nazism. Novaya Gazeta, an independent newspaper, ceased publication in March 2022 and had its license revoked. International NGOs including Amnesty International, Human Rights Watch, and Transparency International have ended operations in Russia, and in 2022 at least twenty-two organizations were classified as “undesirable,” which functionally forces them to cease operations or move abroad.
Protest exists, but in small and subversive forms. Within Russia, it most often takes the form of a “dissident kitchen,” a quasi-protest practice dating back to the Soviet 1950s and 1960s, when the intelligentsia gathered in small apartment kitchens for a free exchange of ideas and information, including art, scientific knowledge, and political information (Sakharov Center, n.d.). The Telegram platform, with personal channels that multiple people can subscribe to, has become a form of “digital kitchen,” as most Western social media platforms have been banned in Russia. Outside of Russia, many movements try to organize and use digital platforms to protest, but their voices are rather weak and decentralized, with a significant amount of internal disagreement (Ziener, Reference Ziener2023).
Humor as a Political Tool in Russia
Humor can function as a form of resistance in attacking the status quo, from institutions to individuals in power (Lynch, Reference Lynch2002), including, specifically, communist leaders and political parties (Davies, Reference Davies2007). Humor has played an important role in resistance and political activism in Russia in the context of authoritarian rule and restrictions on freedom of expression. Political activists use satire, irony, performative art, ridicule, and humorous withdrawal as ways to reconcile incongruent realities or counteract propaganda and oppression.
Satirical content in cartoons, memes, and parody videos is used as an overt tool to criticize political leaders and their policies. Russian political satire has a long tradition, dating back to the Soviet era, when humor was often used to expose the absurdities of the regime. Today, satirical content is often shared on social media platforms, where it can reach a wide audience.
Irony and sarcasm are used to subvert official narratives and highlight contradictions in government statements. For example, during the COVID-19 pandemic, Russian activists used irony to criticize the government’s response to the crisis, pointing out the discrepancies between official statistics and the reality on the ground.
Performative art is also used to draw attention to political issues. For example, the feminist punk rock group Pussy Riot gained international attention in 2012 when they staged a protest performance in Moscow’s Cathedral of Christ the Savior, criticizing the Orthodox Church’s support for Putin and his election campaign. The performance had absurdist, mocking, and satirical elements as five female group members entered the cathedral, put on colorful balaclavas and began jumping around and punching the air on the altar steps. Later they combined the video with the song, which they entitled “Punk Prayer: Mother of God Drive Putin Away.” The song invoked the name of the Virgin Mary, urging her to get rid of the then Russian Prime Minister Vladimir Putin. The song used crude analogies and obscenities to point to the connections between the Church and the government. The performance led to the arrest and prosecution of three members of Pussy Riot.
Under severe restrictions and threats of punishment and arrest, citizens in Russia developed humorous forms of protest as a way to express themselves and challenge authority while minimizing the danger. Thus, in 2011–2020, citizens organized “nanoprotests” or “toy protests” in various cities in Russia, a form of action in which the political participation of humans was replaced by toys (e.g., Lego figures, soft toys, or toys from Kinder eggs), with the toys expressing their political views through slogans such as “I’m for fair elections!” and “This referendum is even more toy-like than ours” (Gogitidze, Reference Gogitidze2012). The terms and the form of action were both an ironic reaction to the inability to organize “real” human protests and a reference to the ubiquitous claims that Russia is a country of advanced nanotechnologies (Nim, Reference Nim2016).
Another case of ridiculing the authorities was the “words auction,” an online event organized by the student magazine “DOXA” to support its journalists, who had been arrested and charged with involving minors in protests. The auction invited subscribers of DOXA channels to donate money and send the editors random words. The words that received the most donations were to be included in the speeches that the arrested students would give in court (@doxajournal, 2021). The money was intended to pay the fines and fees of the arrested journalists (and others). The goals of this act included emphasizing the randomness of the arrests and charges and exposing the absurdity of the court and its procedures (Tyshkevich et al., Reference Tyshkevich, Gutnikova, Metelkin and Aramyan2021; random words selected by online users are in bold):
My freedom of movement is limited. I can’t even go to Reutov – my friend honey badger lives there, and I miss him a lot. I cannot freely go out into the city in the evening, even though for me it is critically important, because I am a brewerosaurus.
[Ogranichena svoboda moego peredvizheniia. Ia ne mogu dazhe s”ezdit’ v Reutov – tam zhivet moi drug-medoed, ia sil’no skuchaiu po nemu. Ia ne mogu svobodno vyiti vecherom v gorod, a dli amenia eto kriticheski vazhno, ved’ ia pivozavr.]
The authorities understand the power of humor and try to stifle it using legal measures, surveillance, intimidation and harassment, and discrediting campaigns. Several Russian comedians were included in the list of foreign agents and had to leave the country (e.g., Galkin, Shatz). Satirical publications and websites have been shut down, and individuals have been fined or imprisoned for their satirical content. Increased surveillance and persecution have also increased self-censorship, as individuals fear the consequences of sharing humorous or critical content. The Russian government has also employed disinformation and propaganda campaigns to undermine the credibility of activists and satirists. It uses state-controlled media outlets to portray dissenters as unpatriotic, as foreign agents, or as promoting harmful ideologies. By discrediting and marginalizing those who use humor as a tool of resistance, the authorities aim to undermine their influence and discourage others from following their lead.
Overall, despite the evidence of extensive use of humor and ridicule in the political context, it is not clear how effective humor is as a tool for resistance and political activism in Russia. It allows activists and citizens to express dissent in creative and subversive ways, but so far it has not been shown to effectively challenge authority or promote change. In the context of repressive measures and violent crackdowns, humor can be seen more as an individual coping mechanism than as a tool of collective political action. Humor is an attempt to fight against the absurdity of Russian reality and to counteract a sense of helplessness and defeat, information overload, and collective guilt.
Whether jokes use satire as a form of protest and resistance to harsh regimes or provide only relief rather than resistance may depend on circumstances dictating which is possible (Davies, Reference Davies2007).
Russian Humor after the 2022 Invasion of Ukraine
Even before the full-scale invasion of Ukraine in 2022, the Russian media waged an information war, ridiculing the other side, undermining the legitimacy of Ukraine as a state, and portraying Ukrainians as subhuman. The media introduced and amplified the use of ethnic slurs for Ukraine and Ukrainians, one of them being a reference to the dill plant because it bears a phonetic similarity to the Russian word for Ukrainians (Berdy, Reference Berdy2014). In addition to the state media, thousands of regular users and hired trolls have been involved in creating and disseminating pro-Kremlin humorous content on social media.
Political dissenters have also leveraged humor to counter the Kremlin’s offensive. In the wake of the invasion, wherein the Russian government enforced strict limitations on all expressions of dissent, the role of cyber humor has become increasingly pivotal. It now plays a central role in countering propaganda and disinformation, challenging the autocratic regime, trying to erode its grip on freedom of expression and human rights, and nurturing a sense of community among those who resist the government, offering them reassurance that they are not alone in their opposition efforts.
In this section, we illustrate patterns of humor use and dissemination by both Kremlin supporters and those who oppose its policies, its ideology, and its war. Additionally, we propose strategies for bolstering and amplifying the use of humor as a tool of resistance in Russia and, more broadly, as a means to foster democratic transformation.
Propaganda Humor
The Russian authorities never underestimated the effectiveness of humor in manipulating public opinion. After the invasion, the media sphere has become a battlefield where all means are fair, with war-fueled humor taking on a more acerbic character. The illegitimacy of Ukraine as a state is among the central themes being pushed through humorous content by pro-Kremlin actors. Prior to the invasion, Ukraine’s geopolitical borders were repeatedly called into question when countless images of a map of Ukraine flooded social media and captions urged Ukraine to follow through on its decommunization policyFootnote 2 and give up territories “gifted” to it by communist Russia. Such a map is shown in Figure 9.1. The caption under the map image of Ukraine mockingly inquires: “As part of decommunization, shouldn’t gifts be returned?” The regions of Ukraine shaded in different colors are marked as “Gifts from Russian tzars 1654–1917,” “A gift from Lenin, 1922,” “Gifts by Stalin, 1939, 1945,” “A gift by Khrushchev, 1954.” The small section in the middle colored in red is identified as “Ukraine within the borders of 1654.”

Figure 9.1 Post on VKontakte from Mozalevsky, D. (September 22, 2018).Footnote 1
This example is similar to many others, as it comes from a popular Russian humorous site, fishki.net, and has been shared multiple times. However, the author of this post adds his own rich framing context by mockingly responding to another user, whom he addresses as the “real, authentic, 25th generation purebred representative of the Ukrainian nation with a truly Ukrainian name ‘Vanya’.” This exaggeratedly ironic subject line already hints at the content of the appeal: Just as the name “Vanya” is not “truly Ukrainian,” but rather is often claimed to be the most authentic Russian name, so is Ukraine not a separate country, but rather a part of Russia. In a nearly 400-word appeal in this post, the author ridicules his opponent for being brainwashed and not understanding that Ukraine is the creation of “Bolshevik idiots” who arbitrarily “drew on the map with a pencil” a territory inhabited by “a bunch of different nationalities” and called it Ukraine. The author continues: “With the same success – from a hangover or, having smoked a little [weed], the Bolsheviks could unite, for example, the Pskov, Novgorod and Tver provinces – into one republic !!! ))))) And name it – ‘UKRAINE’!!!”
After the invasion, the tone of these messages became more ominous as the authors cynically sneered that Russia was simply implementing a decommunization program for Ukraine when it seized Ukrainian territories. Thus, in a tweet from February 2022, the user Lada provided the following retort to criticisms of Russia’s aggression: “I’m sorry, but Russia is acting strictly within the framework of decommunization” (Figure 9.2). The user shared a map similar to the one in Figure 9.1 with regions of Ukraine marked as “Gifts of the Russian Tsars 1654–1917,” “Lenin’s gift, 1922,” “Stalin’s gifts, 1939, 1945,” and “Khrushchev’s gift, 1954.”

Figure 9.2 Post on Twitter from Lada [@_LadyLidiya] (March 21, 2022).Footnote 3
Other themes of pro-Kremlin humor reinforce the idea of Ukraine as a failing state. The inadequacy of Ukrainian leaders and the military, the country’s servile admiration for the West, and grotesque Russophobia have become the most common topics with their own rhetoric and tropes. President Zelenskyy’s comedic past has provided a rich source for comparing Ukraine to a circus. Thus, a post on the Russian platform ok.ru (Figure 9.3) shared a picture of a traveling circus colored in yellow and blue (the colors of the Ukrainian flag) with the statement “All the world’s a stage, only Ukraine is a circus. Circus with clowns.”

Figure 9.3 Post on ok.ru from user “1-2nd platforms-admin dwelling. Donetsk. DNR. RF” (August 23, 2022).Footnote 4
The posts on social media mocked president Zelenskyy as a clown who is under the influence of drugs (Figure 9.4) or is thirsty for blood after the invasion (Figure 9.5). In the example from Figure 9.4, the subject line is a play on words: the word “agony” is made up of two words “fire” and “I,” which can be interpreted as if the author is burning with anger or as if Zelenskyy is on fire. Zelenskyy is portrayed as a clown high on cocaine. The text with the picture reads: “Zelenskyy signed a decree imposing sanctions against 606 representatives of the Russian authorities: 28 members of the Security Council, 154 members of the Federation Council and 424 deputies of the State Duma. Got sniffed again!”

Figure 9.4 Post on Twitter from user Dusia Niushina [@DNusina] (September 9, 2022).Footnote 5

Figure 9.5 Post on Twitter from user Saint Francesco [@saint_francesco] (August 31, 2022).Footnote 6
In the example from Figure 9.5, Zelenskyy is depicted in a clown wig with a red nose added to his face, but he has bloodstains on his suit. Behind him is a war scene with a firing tank and a destroyed building. The text reads: “The bloody clown, for the sake of victory, put more than 1,200 soldiers into fertilizer, but the victory did not happen. More than 3,000 tons of scrap metal is now scattered on the outskirts of the Kherson region.”
A clown, who is supposed to make people laugh and (mostly) is considered a benign figure, is juxtaposed against the horrors of war and made into a “bloody clown” who terrorizes and kills. Zelenskyy the comedian is not funny anymore, as he is being blamed for Ukrainian suffering.
The “expectation vs reality” framing is deployed by Russian users to ridicule the Ukrainian army. The pictures shared online often juxtapose a positive military image, such as an army of disciplined soldiers or a victorious soldier, with a mocking negative image of impoverished, destitute men or draft-dodging cowards (Figures 9.6 and 9.7).

Figure 9.6 “Ukrainian Army. Expectation vs reality.”

Figure 9.7 “Expectation vs reality.”
Ukraine’s relations with the West and its aspirations to become a member of the European Union (EU) and North Atlantic Treaty Organization (NATO) are regularly depicted in crude sexual clichés, as in the following joke: “Ukraine will never be a member of NATO or a member of the EU. It will only be a hole for all kinds of members.”
Even a cursory analysis shows that pro-Kremlin actors weaponize humor to create a certain representation of reality and justify the Kremlin’s actions. Crude and denigrating jokes depict Ukrainians as inferior, ungrateful people who betrayed their Russian brothers and sold out to the West. Easy-to-grasp, pro-Kremlin humor builds on familiar stereotypes that existed in Russia before and advances the Kremlin’s political agenda while hiding the atrocities committed by the Putin regime and the Russian army or justifying them as necessary to restrain Ukraine and its people.
Resistance Humor
At the outset of the invasion, cyber humorists waged a battle against Kremlin propaganda by exposing the true nature of the incursion. Although the authorities consistently labeled it a “special operation” and prohibited war-related terminology to downplay the scale of the invasion, anti-war social media channels were inundated with memes humorously advocating that Leo Tolstoy’s novel War and Peace be renamed Special Operation and Peace. Similarly, a tweet that humorously questioned why Kremlin supporters opted for the Latin letters Z or V instead of the genuinely Cyrillic “Ы” in reference to the special operation was recognized as the top ironic comment of 2022 by a contributor on the Russian humor site fishki.net (Figure 9.8). Without exploring the origins of Z and V as symbols of support for Russia’s war in Ukraine (see Dean, Reference Dean2022), it is important to note that the letter Ы is linked to the iconic Soviet comedy Operation Y and Shurik’s Other Adventures, which follows the naive student Shurik and his victories over bungling criminals. The parallel between Operation Z and Operation Y drawn in the accompanying tweet mocks the special military operation as a failed criminal scheme: “I don’t fully understand why they took Z and V as symbols, the two least Cyrillic letters. Why not Ы?”

Figure 9.8 Post on Twitter from user Ladno, ya Archet [@sir_Archet] as cited on Fishki.net.Footnote 7
President Putin’s speech on February 24, which marked the beginning of the invasion, has faced ongoing ridicule through satirical memes, sarcastic comments, and pointed jokes (anekdots). The following joke involving Russia’s Defense Minister, Sergei Shoigu, and President Putin satirizes one of the central arguments of Putin’s speech, the call for de-Nazification of Ukraine (“Volodya” is a short pet name for “Vladimir”):
• Sergei, why are we retreating from Kherson?
• Volodya, you yourself ordered the liberation of Ukraine from fascists and Nazis.
The joke flips President Putin’s argument, exposing the invasion’s illegitimate and aggressive character while drawing a parallel between Russian forces and the Nazis.
Another much-derided point in Putin’s speech was that Ukraine had committed genocide in the Donbas region while Russia had spent “eight long years” trying to resolve the conflict peacefully (Putin, Reference Putin2022). A widely shared meme features an animated ape passionately discussing the “eight years” in colloquial, misspelled Russian; the meme became so popular that it is now part of meme-arsenal.com, a Russian meme-generating site. The irreverent comparison of the President to an ape highlights the argument’s absurdity, effectively discrediting it (Figure 9.9).

Figure 9.9 Post on Pikabu.ru from user victor545.Footnote 8
As Putin’s leadership has come to symbolize authoritarianism, the strategy of democratic forces opposing his rule has been to constantly mock the Russian president, aiming to undermine his authority. Even minor slipups provide ample material for satirists. For example, when Putin commented at the All-Russian Youth Environmental Forum “Ecosystem. Protected Land” that Russia was the true land of the rising sun, it triggered a wave of sarcastic tweets where Russia was humorously referred to as “the land of the rising sun and setting hopes” (Figure 9.10).

Figure 9.10 Post on Twitter from user Bezdrotova kolonka/Besprovodnaia kolonka [@b_currant_girl] (September 5, 2022).Footnote 9
Some playfully accused Japan of stealing the title from Russia, along with the notion of an eternal emperor. As the text in Figure 9.11 says in response to Putin’s comment, “Japan stole from Russia both the rising sun and the idea of the eternal emperor.”

Figure 9.11 Post on the Telegram channel “Pezduzalive” (September 5, 2022).Footnote 10
While personal attacks may provide entertainment and provoke strong social media reactions, opponents of Putin’s autocratic regime recognize the need to target the key pillars of his power that stifle democratic processes. These include institutions, propaganda and disinformation, and constraints on freedom of speech. These subjects regularly appear in reports on the popular opposition YouTube channel “Superpower News.” The channel’s founder, Maxim Maltsev, is known for his unique news reporting style, characterized by irreverent gallows humor and deadpan delivery. For example, in his video posted on November 2, 2023, and titled “Thank you to those who die for Putin – Simonyan mocks Russians” (Figure 9.12), Maltsev employs gallows humor to critique the position of Russian authorities and propaganda purveyors, such as Margarita Simonyan, the editor-in-chief of the state-controlled Russian broadcaster RT, often referred to as a “Kremlin mouthpiece” (e.g., see U.S. Department of State, 2022).

Figure 9.12 YouTube video “Thank you to those who die for Putin – Simonyan mocks Russians” on the channel Superpower News.Footnote 11
Simonyan justifies the war in Ukraine, claims that everything is going according to plan, and expresses gratitude to Russian citizens, including those who have given their lives for the war effort. Maltsev ironically questions the nature of a plan that requires the sacrifice of so many Russian and Ukrainian lives. The video’s title provocatively reads, “Thank you for dying. The fallen, stay healthy.”
In another video (Maltsev, Reference Maltsev2023), Maxim criticizes new Russian high school history textbooks, which he argues follow the principles of propaganda and will turn a new generation of Russians into zombies ready to serve as cannon fodder in the Kremlin’s protracted war. He wryly remarks that parents must now worry about their children getting “A” grades in history classes; poor grades, according to Maxim, are no less troublesome, because students failing history courses may find themselves promptly dispatched to the front lines.
The theme of widespread deception is a recurring one, with blame extending beyond the Russian president, propagandists, and elites. Those who do not oppose the regime are also viewed as complicit in perpetuating lies. In April 2022, a widely popular meme targeting those who refuse to acknowledge the truth was shared on the Telegram channel “Most na Zhepi.” The image features a cat that has obviously eaten a good chunk of a tray of cold cuts, yet the captions read “Not everything is so clear” and “We don’t know the whole truth,” echoing common retorts to statements that Russia invaded Ukraine and is waging a brutal, unjust war (Figure 9.13).

Figure 9.13 Post on Telegram from user Ivan B. (April 2, 2022).Footnote 12
Since its posting, this image has been widely shared and adapted across various social media platforms. In one follow-up, a post mockingly deployed propaganda tropes, claiming that if we get to the bottom of it, there are at least five plausible explanations: (1) the ham took a bite of itself; (2) the cat defended the ham from the owner by eating it; (3) the picture is a fake and represents an information war between the cat and the ham; (4) we are not experts in ham, and therefore it is difficult to say what the picture shows; and (5) they said on TV that it was not the cat (Figure 9.14).

Figure 9.14 Post on Twitter from user Helgi Oiisac [@HelgiOiisac] (April 5, 2022).Footnote 13
As the war continues, there is an increasing trend of humorous posts targeting ordinary Russian citizens (Arkhipova, Reference Arkhipova2023). This upsurge in self-deprecating humor represents a strategic shift in the efforts of prodemocratic forces. They aim to dissuade people from complying with the Kremlin’s orders by publicly shaming them for their complicity. An example of this self-deprecating humor is a joke about Russian soldiers supposedly fighting alongside the Nazis, subverting the commonly held patriotic trope of victory over fascism. This joke irreverently challenges Putin’s narrative of de-Nazification in Ukraine while simultaneously accusing ordinary Russians of supporting an unlawful war:
• Good afternoon! You are summoned to the military recruiting office.
• And with whom will we fight?
• With the Nazis, of course!
• And against whom?
Stalin-era self-deprecating jokes that emphasized the powerlessness of the common person have been recycled to reflect the new Russian realities. To cope with the fear of being rounded up, labeled “foreign agents,” or thrown in jail, people invoke jokes from the World War II era:
• Tell me, what concentration camp are we being taken to?
• I don’t know, I’m not interested in politics.
The joke mocks the feigned neutrality of citizens who claim to be uninterested in politics in order to avoid discussions of collective responsibility and civic duty. The person who is not interested in politics ignores what is going on around them and eventually ends up in a concentration camp, a place where humans are tortured and killed in large numbers. A lack of interest in politics can thus lead to one’s own demise.
This self-deprecating humor serves multiple social functions, primarily offering humorists and those who share their content a feeling of cognitive independence in a political environment where such independence is unattainable (cf., Wisse, Reference Wisse2013). Although this diverse online community may not directly confront the regime, it unmistakably and audibly rejects the regime’s values and ideology, and it also mobilizes new followers.
The ongoing struggle for democratic change in Russia, involving both pro- and anti-war forces, is incomplete without considering the Russian émigré community, which has grown significantly as many Russians have left the country (van Brugen, Reference van Brugen2022). Unburdened by the fear of persecution for criticizing the regime, this expanding Russian diaspora openly mocks the Russian authorities, including President Putin, underscoring the groundlessness of the official Russian narrative.
An example of this irreverent humor is the Propaganda Review series hosted on the YouTube Navalny Live channel. In this series, presenter Anton Pikuli (Ustimov) gives sarcastic reviews of Russian news, mocking the absurdity of Putin’s regime. In a video posted on March 26, 2022 (Figure 9.15), Anton Pikuli reports on the official rally “For a World Without Nazism,” which took place in Moscow three weeks after the invasion to commemorate the eighth anniversary of the annexation of Crimea. More than 200,000 people attended the event, at which Putin delivered a speech praising Russian troops fighting in Ukraine. Pikuli ridicules the event, commenting that “tens of thousands of civil servants” were forcibly rounded up by the authorities “to create the illusion of support for Putin.” These people “were under the gunpoint of snipers, who apparently protected Putin from excessive support from the Russians.” In his series, Pikuli creates an advantageous subjective position for his viewers, who, at least for a moment, are freed from their fear and frustration and can laugh at authority. The episode was viewed by 1.4 million people, about the average for the series. The blogger has been declared a “foreign agent” by the Russian Federation Ministry of Justice.

Figure 9.15 Anton Pikuli (Ustimov) hosts the YouTube series Propaganda Review on the Navalny Live channel.Footnote 14
The irreverent irony culminates in the widely popular animated online series Masyanya, in which the female protagonist comes to Putin, showers elaborate profanity on the president for attacking Ukraine, and leaves him a Japanese sword so he can kill himself (Figure 9.16). The audience experiences cathartic relief when Masyanya returns safely to her apartment, where her boyfriend shares with her the happy news that the war is over. Over 6 million people watched the episode within a few weeks of its release, marveling at the possibility of Russia freeing itself from Putin’s tyranny and ending the war. The Russian media regulator Roskomnadzor banned the series, claiming that it “contains false information” and “discredits the Russian armed forces” (Current Time, 2022).

Figure 9.16 Masyanya, Episode 160 on YouTube. “Masyanya is trying to solve the war situation.”Footnote 15
When viewed through the prism of democratic processes, cyber humor serves multiple functions, ranging from attempts at eroding autocratic structures to facilitating the broader dissemination of information and fostering a digital community of individuals who might otherwise have experienced a sense of fragmentation and isolation – precisely the conditions totalitarian regimes seek to impose on their citizens (Desmet, Reference Desmet2022).
Discussion
Our analysis reveals that prodemocratic forces in Russia persist in their struggle against autocracy amid Russia’s ongoing war with Ukraine. As censorship laws have grown stricter and dissent against the Russian authorities has been criminalized, the conflict between pro- and antidemocratic forces has shifted to the online sphere. Social media has evolved into a battleground where humor is wielded as a potent weapon in a high-censorship environment, its seemingly shallow surface revealing underlying truths (Fox, Reference Fox and Amarasingam2011).
Humor creates opportunities for political engagement in Russia. Both pro-Kremlin actors and their opponents use humor to construct social reality, connect with people on an emotional level, and influence the audience’s political behavior. Recognizing the power of humor, the Russian authorities try to maintain a monopoly on political humor through heavy censorship and legal restrictions. Despite significant obstacles, dissent is still audible on Russian social media, and the Russian diaspora that has formed since the war began is leading the way, openly challenging Russian official discourse and propaganda.
The creators of both propaganda humor and resistance humor are equally adept at recycling and generating Soviet-style jokes (anekdots) originally intended for oral distribution, while also adopting the styles and formats of modern social networks, including images, videos, and multimedia. The two most popular media spaces for sharing humor are Twitter (now called “X”) and VKontakte.
The key differences between propaganda humor and resistance humor lie in their themes, targets, preferred types, and the subject positions afforded by humorous narratives. Propaganda humor is directed outward, dealing aggressive blows to the enemies of the state – Ukraine, its leaders, military, and people. Disaffiliating, aggressive humor helps the Russian regime divide society and dehumanize its opponents. The preferred types of propagandistic humor are mockery and parody, as these forms of humor provide a position of superiority for propagandists to cast judgment upon the targets of their humor.
We found two types of resistance humor in our data – inwardly directed and outwardly directed. Inwardly directed, self-deprecating humor makes up a large part of the political humor shared on Twitter and VKontakte. People laugh at themselves to cope with a harsh reality when their lives are uprooted, their sons are drafted into the army, and they find themselves in a police state worse than the USSR. Self-deprecating humor draws on a long tradition of Soviet oral jokes, repurposing them for new media; this humor provides temporary relief and creates a sense of camaraderie between people. This humor also shames people for their passivity and complicity with the regime, thereby encouraging the audience to oppose the regime, even if not overtly.
Combative, outwardly directed humor has become a staple of the political humor of the Russian diaspora, actively challenging and subverting the government’s pro-war narratives and propaganda. This humor attacks Putin and Russian authorities and institutions, mobilizing people and giving them hope.
Conclusion and Future Steps
Many reports detail Russia’s use of algorithms, bots, trolls, and other technical means to manipulate global narratives, disrupt democratic processes, meddle in elections, distort information sharing, and normalize cyberbullying and disinformation. Less documented are the ongoing struggle of democratic forces within Russia, the human impact of information warfare, and the sociocultural aspects of competing narratives surrounding the war, including nuanced patterns in Russian discussions of the war across multimodal social media, in particular the use of humor in those competing narratives.
Understanding communication tactics and the interplay between actors, their messages, and behaviors within Russian-speaking social media spaces is crucial. Such understanding can help refine strategies and increase precision in countering Russian (mis)information power, ultimately amplifying dissenting voices in Russia and advancing the democratic agenda. The interplay of censorship, language dependencies, and historical preferences has created a unique information ecosystem in Russia. Knowledge of communication patterns among Russian-speaking users, including the use of humor, will guide the development of counternarratives to combat Russian propaganda. As Ukraine President Volodymyr Zelenskyy, himself a former comedian, said in a 2022 interview, Russian President Vladimir Putin is afraid of humor because comedy is a “powerful weapon” for spreading truth.
Further understanding of the role of humor in defeating misinformation and advancing the goals of democracy will contribute not only to models that assess the expected outcomes of online propaganda and resistance techniques in Russian-speaking social media spaces but also, more broadly, to understanding how various information ecosystems shape narratives. As a clandestine instrument, humor can be particularly effective in resisting propaganda and promoting democratic processes in the high-censorship environments of authoritarian, nondemocratic systems as well as in emergent democracies, including former Soviet states such as Georgia, where many Ukrainian refugees and Russian draft evaders fled after the invasion. Although public support in Georgia for joining the EU and NATO remains strong, pessimism about democratic backsliding is growing, and Russian influence there has increased since the invasion of Ukraine (Fix & Kapp, Reference Fix and Kapp2023). Russian-speaking citizens in Georgia and other emerging democracies in the region are exposed to Russian propaganda and may also be reading Russian-language posts resisting official narratives. Humor in those messages may support Russian-speaking citizens in these emerging democracies by showing solidarity, thus fostering global cyber communities against authoritarianism. This line of research may also suggest different ways in which humor can be used in various misinformation contexts in the low-censorship social media environments of American and other democratic societies.
Finally, while humor has the potential to bolster resistance to propaganda and thus promote cyber peace and the resilience of democratic institutions, it should be noted, as demonstrated here, that it can also be used for nefarious purposes as a tool to manipulate opinion in support of pro-war and antidemocratic propaganda, particularly as it operates on an emotional rather than a rational level.
Introduction
The Central Asian region comprises a complex historical, political, social, and cultural landscape. The five countries – Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan – are located at the nexus of Russia, China, South Asia, and the Middle East. Having been part of the Soviet Union, the Central Asian republics entered the Commonwealth of Independent States after the bloc’s disintegration in 1991. Turkmenistan reduced its involvement to associate membership in 2005, maintaining a status of permanent neutrality (Mite, Reference Mite2005). Kazakhstan, Kyrgyzstan, Tajikistan, and Uzbekistan have participated in Russia-led security and economic organizations. The Collective Security Treaty Organization (CSTO) currently comprises Kazakhstan, Kyrgyzstan, and Tajikistan; Uzbekistan suspended its membership in 2012 (RFE/RL, 2012). The Eurasian Economic Union (EAEU) counts Kazakhstan and Kyrgyzstan as full members, and Uzbekistan gained observer status in 2020 (Aidarkhanova, Reference Aidarkhanova2023). The region’s traditionally strong relations with Russia now face competing interests, most prominently a shift toward closer cooperation with China and a diversification of international partnerships. Economic ties and trade with China have intensified under the Belt and Road Initiative, alongside security cooperation, including within the Shanghai Cooperation Organization (SCO). Kazakhstan, Kyrgyzstan, Tajikistan, and Uzbekistan are among the SCO’s members, together with China, Russia, and, more recently, India, Pakistan, Iran, and Belarus (Kassenova, Reference Kassenova2022). The US deepened its engagement in Central Asia in response to the war in Afghanistan, and the region has been dealing with the fallout from the Taliban takeover (Putz, Reference Putz2023; Temnycky, Reference Temnycky2023).
The US commitment has transitioned from primarily security-focused interests to a broader diplomatic and economic strategy, such as the C5+1 diplomatic platform serving as a multilateral dialogue initiative (U.S. Department of State, 2023).
Central Asia is at a critical demographic juncture, with a combined population of more than 78 million that is young and growing. People under thirty constitute more than 50 percent of the population and are set to form a strong labor force in an increasingly digitalized society (UNODC, 2023). The strategically positioned region holds unequally distributed oil and gas reserves (Kazakhstan and Turkmenistan possess most of the hydrocarbon resources) as well as deposits of critical minerals that attract international cooperation. The armed conflict in Ukraine reinforced the diversification of economic and security alliances and catalyzed reconsiderations of Russia’s role in the region. While regional dependencies remain strong, Moscow is viewed with increased suspicion (Freeman, Helf, & McFarland, Reference Freeman, Helf and McFarland2023; Hess, Reference Hess2023). Central Asian leaders have distanced their rhetoric from Russia’s concerning the war against Ukraine. The countries declined to recognize the annexation of Ukrainian territories, officially complied with sanctions against Russia, and declared support for Ukraine’s territorial integrity (Freedom House, 2023b).
More than three decades after gaining independence, efforts toward democratic reform in Central Asia have shown limited progress. Regional approaches to the rule of law, democratic institutions and processes, and the protection of human rights and fundamental freedoms remain distorted. The Central Asian republics span a spectrum from authoritarian-leaning countries to consolidated autocracies. According to Freedom House’s “Freedom in the World” (2023b) report, all five states are considered “not free.” Kyrgyzstan is regarded as the only democracy in the region, but the country has gradually shifted toward authoritarianism. At the other end of the spectrum is Turkmenistan, a repressive authoritarian state where political rights and civil liberties are largely denied. These governing structures are reflected in online spaces: assessing internet freedom, Freedom House (2023c) categorizes Kyrgyzstan as “partly free” and Kazakhstan and Uzbekistan as “not free,” with no data available for Tajikistan or Turkmenistan.
Despite the negative outlook, some democratic processes have become more pluralistic, mainly because the countries prioritize economic modernization, development of digital infrastructure and services, and attracting foreign investment. However, elections, critical markers of democratic progress, remain uncontested. Incumbent administrations use various forms of suppression, including digital repression, to ensure that opposition candidates and parties do not gain viable support or political traction. Prospects for grassroots change are negligible, which translates into the voting results: the key questions in any election are the margin by which the regime-chosen candidate wins and the size of the turnout. For illustration, in the November 2022 presidential election in Kazakhstan, incumbent President Kassym-Jomart Tokayev won with over 81.3 percent of the votes on a reported turnout of 69.4 percent (OSCE & ODIHR, 2022b), while in Uzbekistan’s presidential election in July 2023, incumbent President Mirziyoyev won an 87.1 percent share on a voter turnout of 79.8 percent (OSCE & ODIHR, 2023). The Turkmen presidential election in March 2022 saw the office passed from father to son when Serdar Berdimuhamedow was elected with 73 percent of the votes and a 97.2 percent voter turnout (Vershinin, Reference Vershinin2022).
Amid efforts to exert control over election results, the digital environment has emerged as a vital operational domain. In this respect, the Central Asian republics are part of a broader trend among authoritarian-leaning countries. Information autocracies, as observed by Sergei Guriev and Daniel Treisman (Reference Guriev and Treisman2019), use technology to control the online information space. These regimes display a track record of restrictive policies, especially around elections and referendums, leveraging state control of internet infrastructure and the influence governments exercise over digital service providers operating in their jurisdictions. While tendencies toward digital authoritarianism are present in all five countries, Central Asia is not a monolithic space. States display national specifics depending on the available tools and resources, sociopolitical contexts, and strategic objectives. Kazakhstan, Kyrgyzstan, and Uzbekistan enjoy comparably open online spaces, in contrast to Tajikistan, where online information and communication are heavily restricted and subject to government control, and Turkmenistan, which has pursued a stringent, centralized approach to internet control through a combination of offline and online measures that maintain a highly sterilized online environment (Muhamedov & Buralkiyeva, Reference Muhamedov and Buralkiyeva2023).
Electoral processes worldwide are increasingly vulnerable to cyber-enabled interference originating from abroad. Perpetrators harness the power of social media and messaging platforms to disseminate fake news and disinformation, conduct disruptive attacks on governmental services and on electoral and related critical infrastructure, breach and leak data, penetrate candidates’ online accounts, or conduct espionage to gain strategic advantage (ASPI, 2019; Van der Staak & Wolf, Reference Van der Staak and Wolf2019). Such interference can affect elections both directly and indirectly by eroding trust in democratic processes and institutions over the long term. The 2016 US presidential election stress-tested the influence of online intrusion, particularly when carried out by threat actors affiliated with or sponsored by adversarial states (United States Senate, 2017). Reports of foreign interference have been growing ever since, with malicious actors expanding their tactics, techniques, and targets. For example, in a statement from December 2023, the British government condemned Russia’s attempts to target high-profile individuals and entities through cyber operations intended to use hacked information to interfere in national democratic processes. The data breach was attributed to a group within Russia’s intelligence services and reportedly involved several years of spear-phishing campaigns prominently targeting parliamentarians, as well as universities, think tanks, journalists, the public sector, and civil society organizations (UK Government, 2023).
The second section examines the domestic and foreign cyber-enabled election interference in the region in further detail. National case studies are analyzed and positioned against the backdrop of global trends and regional practices. The presented examples are not exhaustive and serve as a representative snapshot. The third section considers potential incentives for governments to secure electoral infrastructure and wider democratic processes, including strengthening respective digital ecosystems. The final section offers policy recommendations to protect electoral integrity, enhance cyber resilience, and bolster accountability, which extend to states, international and regional bodies, the private sector, and civil society.
Cyber Threats to Election Integrity
Network Interference
The Central Asian republics control, monitor, and manipulate online information around elections and other politically charged events. As early as February 2005, the Citizen Lab (Deibert, Reference Deibert2005) reported that Kyrgyz websites belonging to political parties and independent media were subject to hacking during the parliamentary election. The pattern of recorded failures pointed to deliberate interference with access to online sources (OpenNet Initiative, 2005). Influenced by the Russian and Chinese models of information control (Weber, Reference Weber2019), governments in the region employ a plethora of restrictive measures to silence political dissent or discontent and censor the media and civil society voices. Several connected tactics are used to restrict access to online information, including blocking content and throttling or shutting down networks (Muhamedov & Buralkiyeva, Reference Muhamedov and Buralkiyeva2023).
Internet shutdowns are an extreme form of digital censorship. Authorities in Turkmenistan have been particularly prone to imposing large-scale blackouts; the country is a repeat offender, deploying censorship when the electorate votes and keeping tight preventive control over the online information space. Recent incidents were reported by Access Now (2023b) around protests in December 2021 and during the March 2022 presidential election, when citizens were plunged into digital darkness. Kazakhstan has also repeatedly switched off the internet amid elections and protests. The most severe internet shutdowns to date were imposed in reaction to the massive protests of January 2022, when the Kazakhstani authorities initiated a country-wide blackout to curb the unrest. The government also disrupted internet access around the extraordinary presidential election in November 2022, accompanied by targeted blocking of social media and communications platforms (Access Now, 2023a). From a legal perspective, governments justify internet shutdowns with anti-terrorism and public security provisions under domestic law. On the technical level, near-complete shutdowns are enabled by centralized state control and influence over large segments of the telecommunication infrastructure providing internet services (Pavlova, Reference Pavlova2022).
Selectively blocking access to online content is a widespread practice across Central Asia. According to Freedom House (2023a), Turkmenistan exerts the tightest control over the online information space, where the state-run internet service provider permanently blocks websites that carry independent news coverage or critical content. A key election in Kazakhstan in June 2019, when the country voted for a successor to then-president Nursultan Nazarbayev, was also accompanied by excessive network disruption on election day and restrictions on streaming services and several social media platforms (NetBlocks, 2019). The authorities justified the network interference with concerns over the spread of false information (Freedom House, 2020). During the January 2021 parliamentary election, media freedoms were curtailed with social media restrictions and distributed denial of service (DDoS) attacks designed to disrupt or block websites. According to the country’s officials, the targeted censorship intended to prevent the spread of false information and maintain public order (Freedom House, 2022). Kazakhstan repeated similar tactics in the extraordinary presidential elections in November 2022 (Access Now, 2023a; RSF, 2022) and again in the parliamentary election in March 2023 (RFE/RL, 2023). The region’s practice of suppressing free speech extends into censorship in online spaces. For illustration, Uzbekistan amended its criminal code to make insults against the president illegal and added penalties for online offences. These provisions, infringing on freedom of expression, entered into force ahead of the presidential elections in October 2021 (Freedom House, 2021).
DDoS attacks disrupt the traffic of a server, service, or network by overwhelming the target or its surrounding infrastructure. These disruptive attacks are simple to deploy and difficult to prevent, especially for small and under-resourced entities, which makes them a popular tool for censoring media outlets and the websites of civil society organizations and political candidates (Korosteleva, Reference Korosteleva2022). For example, the Kazakhstani news site Vlast reported suffering a persistent targeted DDoS attack in January 2021 (Freedom House, 2022). DDoS attacks also targeted several news sites, including KazTAG and Orda.kz, prior to the presidential elections in November 2022 (Justice for Journalists, 2024). Disruptive cyber operations have been playing an increasingly political role. In September 2022, Kazakhstan experienced a series of cyberattacks, primarily DDoS, that interfered with internet connectivity across the country, allegedly linked to the upcoming presidential elections scheduled for November 2022. The intent behind these incidents remains unknown, but President Kassym-Jomart Tokayev blamed foreign actors seeking to destabilize the country during a sensitive political period (Eurasianet, 2022; Khassenkhanova, Reference Khassenkhanova2022). Considering the regional context, the cyberattacks might have been connected to the war in Ukraine. According to the CyberPeace Institute’s database “Cyber Attacks in Times of Conflict Platform #Ukraine,” government websites in Kazakhstan were targeted by Russia-affiliated groups deploying DDoS attacks on three different occasions in October 2022.
It is noteworthy that Russia has employed DDoS attacks as a coercive measure around politically charged events in the past, including state-orchestrated DDoS attacks on Estonia during the bilateral conflict in April and May 2007 and in the Russo-Georgian War in August 2008 (Kozłowski, Reference Kozłowski2020). Another early example was the use of DDoS attacks as part of mounting Russian pressure to persuade the Kyrgyz president to close the US base in Manas (Bradbury, Reference Bradbury2009). According to the Electoral Knowledge Project, in January 2009, DDoS attacks targeted the country’s internet infrastructure, lasting approximately ten days and eliminating 80 percent of the country’s limited online capacity.
Cyberattacks Targeting Electoral Infrastructure
Malicious cyber activities against electoral infrastructure, including electronic voting systems and voter registration databases, with the intent to obstruct voting or erode trust in election results have been observed worldwide (ASPI, 2019). Central Asian countries have experimented with e-voting systems. Kazakhstan actively used e-voting technology between 2004 and 2011 but discontinued this practice and returned to paper ballots due to concerns over the integrity of e-voting procedures (Kassen, Reference Kassen2020). Uzbekistan piloted biometric identification during the referendum on constitutional amendments in April 2023 (Gazeta.uz, 2023). The Kyrgyz authorities view the digitalization of electoral infrastructure as a step toward enhancing transparency and increasing electoral participation through biometrics registration, biometric identification of voters, and ballot scanning and reporting technology. The e-system is designed to prevent common forms of election fraud such as vote buying, carousel voting, and group or family voting (Sheranova, Reference Sheranova2020). This technology was piloted in the 2015 parliamentary election to strengthen the integrity of polling stations (Goyal, Reference Goyal2017; Putz, Reference Putz2015). In 2016, the Osh City Council introduced e-voting after repeated violent political uprisings in the region. Kyrgyzstan has conducted e-counts since 2017, with optical scanners used to verify counts (International IDEA, 2023). The United Nations Development Programme (UNDP) (2021) reported a large-scale campaign to collect voters’ biometric data prior to local council elections in March 2021, intended to increase the reliability of the vote count during elections and referendums. In April 2023, the Central Election Commission of Kyrgyzstan presented a pilot electronic voting project and an upgraded voter identification device for local elections (Kharizov, Reference Kharizov2023).
Public concerns about the integrity and reliability of electoral infrastructure, whether based on suspected external interference or internal manipulation, can have serious repercussions. Such disruption erodes public trust, impacts voter turnout, and challenges the legitimacy of election outcomes. The Kyrgyz parliamentary elections in November 2021 were accompanied by allegations of interference, and opposition leaders called for the results to be annulled after technical difficulties appeared to have affected the vote count (OSCE & ODIHR, 2022a; RFE/RL, 2021). The Central Election Commission’s tabulation monitor experienced a blackout, and when the system restarted, different results appeared on the monitor. The discrepancy resulted in several opposition parties falling below the 5 percent threshold required for entering the Parliament (Pannier, Reference Pannier2021). Technical failures can be destabilizing, especially in countries with contested election outcomes. The 2021 Kyrgyz elections in question followed a failed parliamentary vote in October 2020, in which the Central Election Commission declared the results of voting in all polling stations invalid amid protests over alleged campaign violations and unfair voting practices. Concerning potential foreign interference in the 2021 repeated elections, the Kyrgyz Security Council registered cyber incidents targeting the Central Election Commission’s servers, originating from twenty countries, without affecting the elections (Kabar, 2021b; Orlova, Reference Orlova2021). The Commission reported on the attempted cyberattacks against national servers while stating that the current level of protection sufficiently defends the country’s electoral infrastructure (Kabar, 2021a).
Data Leaks and Cyber Espionage
Central Asia has witnessed a shift toward digitalizing government services driven by digital economy initiatives and international support. However, as pointed out in the Electoral Knowledge Project (n.d.), rapid digitalization can increase the attack surface for malicious activities if coupled with massive data collection and insufficient safeguards that would ensure privacy, transparency, and effective oversight to handle citizens’ personal information securely. Sensitive data can be hacked, leaked, or otherwise exploited for the purpose of fraud, jeopardizing people’s security, or undermining trust in democratic processes and institutions. For illustration, the Kazakhstani Central Election Commission experienced an extensive data breach in July 2019. Data from the election database, comprising information on 11 million Kazakhstanis, which constitutes approximately 60 percent of the total population, was published online, exposing names, identification numbers, passport numbers, and residence addresses (Vlast, 2019). The Ministry of Internal Affairs launched an investigation into the unlawful dissemination of confidential data in the same month. However, the investigation did not find anyone accountable for the incident. The official report cited insufficient evidence of a crime as the leaked personal data was neither further used nor caused harm. Still, the mere publication of personal data suggests malicious intent and poses risks to citizens, including the potential for identity theft, targeted scams, and social engineering to exploit the targets. The decision taken by the Kazakhstani Ministry of Internal Affairs to close the investigation highlights the lack of accountability for data breaches – even when national security and the democratic process are at stake (U.S. Bureau of Democracy, Human Rights, and Labour, 2020).
Data exfiltration and cyber espionage can also constitute election interference. Primary targets of such operations are government agencies, officials, and political candidates, but hacking extends to research and civil society organizations, journalists, and private companies (O’Connor et al., Reference O’Connor, Hanson, Currey and Beattie2020). Although the known cases of targeted hacking do not conclusively prove an objective to interfere in democratic processes, threat intelligence reports increasingly point to the political motivation of state-affiliated groups operating in Central Asia. Microsoft’s reporting (2022) suggests that China utilizes cyber espionage as a tool for advancing the country’s political, economic, and military influence, including in Kazakhstan. A Chinese cybersecurity vendor further reported on Russian-speaking actors deploying malware to surveil a wide range of individuals and organizations in Kazakhstan, extending to government agencies, military personnel, researchers, journalists, private companies, educational organizations, and dissidents (Cimpanu, Reference Cimpanu2019). Securelist by Kaspersky (2018c) reported on a Chinese threat actor detected behind a targeted campaign linked to a high-level meeting in Central Asia pursuing a regional political agenda and another campaign targeting a national data center in the region that allowed access to a wide range of government sources (Securelist, 2018a). Another report by Securelist (2019) detailed Zebrocy malware, primarily associated with Russian state-sponsored hacking groups, spearphishing Central Asian government-related targets, both in-country and in remote locations. A Russia-affiliated group was also reported to have used the potential Telegram ban in Kazakhstan to distribute malware in an alternative communication software for the political opposition (Securelist, 2018b).
In yet another case, a hacking group deployed a novel backdoor against international governmental targets located in Kazakhstan, with low confidence indicators pointing to Moscow (Chakravarti, Reference Chakravarti2023). While not directly interfering with the elections, these incidents point to an active engagement of Russia- and China-affiliated actors in politically motivated hacking in the region, targeting critical components of democratic processes.
Disinformation Campaigns, Inauthentic Accounts, and Synthetic Content
Disinformation deployed to deliberately share false narratives and information with the aim of manipulating opinion and discrediting the opposition intensifies around elections. Online disinformation campaigns target a wide array of political candidates, media organizations, individual journalists, and civil society representatives. Malicious actors leverage social media and messaging platforms to sway voters, amplify content via inauthentic accounts, and deploy trolls to harass individuals and distort online discourse (Lee Myers, Reference Lee Myers2022). In Central Asia, coordinated online attacks underpin a high incidence of online and offline abuse aimed at deterring candidates and independent reporting (Pavlova, Reference Pavlova2023). Networks of fake accounts promoting pro-government narratives have been observed in Kazakhstan around elections, including in the presidential election campaign in November 2022 (Du Boulay, Reference Du Boulay2023). These accounts are commonly referred to as “nurbots,” after the ruling Nur Otan Party (Kozhanova, Reference Kozhanova2019). Kyrgyz activists reported on fake accounts spreading disinformation, allegedly coordinated by national ministries and operated by a combination of government agencies and individuals (Bradshaw, Bailey, & Howard, Reference Bradshaw, Bailey and Howard2021). The Committee to Protect Journalists (2020) and Reporters Without Borders (2020) observed that law enforcement agencies in Kyrgyzstan instructed a troll farm to discredit critics. The practice of using fake accounts has also been observed in Uzbekistan, where online trolls derail discussions and undermine the reputation of journalists and news organizations (Bradshaw, Bailey, & Howard, Reference Bradshaw, Bailey and Howard2021).
As documented by the Australian Strategic Policy Institute (ASPI) (2019), Russia frequently deploys coordinated information operations abroad. The government and state-affiliated groups exploit the online information space to sway public narratives. These efforts persist beyond elections, forming part of a broader, long-term strategy to manipulate public opinion and erode trust in democratic institutions. Networks of accounts originating in Russia and targeting Central Asian states are regularly detected on popular social media platforms (Bradshaw, Bailey, & Howard, Reference Bradshaw, Bailey and Howard2021). Online disinformation campaigns have intensified in the aftermath of the international armed conflict in Ukraine (Cabar, 2023). Disinformation finds fertile ground amid the expanding social media landscape and low trust in traditional media. The commercialization of advanced artificial intelligence tools is set to facilitate further abuse and empower malicious actors to spread fake and harmful content (Khashimov, Reference Khashimov2021; Neudert, Reference Neudert2018; Stanley-Becker & Nix, Reference Stanley-Becker and Nix2023; Tiku, Reference Tiku2022).
Online Surveillance and Intimidation
Politically motivated surveillance is pervasive throughout Central Asia, with governments employing sophisticated technologies to monitor citizens’ communications and online activities. Invasive data collection practices are facilitated by the adoption of Russian-style legislation and tools, increasing use of Chinese surveillance and censorship technologies, and adaptation of domestic legal frameworks to authorize state overreach (Weber, Reference Weber2019). The Kazakhstani authorities intercept internet traffic, and internet users are required to install digital security certificates designed to generate detailed logs of their online activity (Kumenov, Reference Kumenov2020; Raman, Reference Raman, Evdokimov, Wurstrow, Halderman and Ensafi2019). Human Rights Watch (2021) reported that the “false information” bill in Kyrgyzstan, which entered into force in August 2021, compels internet service providers to register their clients in a unified identification system and to provide authorities with complete user information if a court or a state agency requests such data. Without adequate transparency and independent oversight, these intrusive practices are weaponized against the opposition, the media, and civil society, violating individuals’ rights to privacy and participation in civil society as well as freedom of expression, assembly, and association.
Both Kazakhstan and Uzbekistan have repeatedly engaged in the targeted surveillance of media, dissidents, and activists (Kumenov, Reference Kumenov2018; Marczak et al., Reference Marczak, Scott-Railton, Perry, Al-Kizawi, Anstis and Panday2023; RFE/RL, 2016; Securelist, 2023). As early as 2015, WikiLeaks published documents linking state agencies to a surveillance software company, indicating that the government might have obtained software to monitor and interfere with online traffic, including encrypted communications, as well as to perform targeted attacks against certain users and devices (Freedom House, 2020, 2023b). In 2021, Amnesty International’s Security Lab conducted forensic analysis on the phones of Kazakhstani human rights activists, confirming that four individuals had their devices infected with the Pegasus spyware (Amnesty International, 2021). A Justice for Journalists (2020) report highlights how online surveillance goes hand in hand with arbitrary arrests of opposition figures and journalists, creating a climate of self-censorship and preventing genuine political participation.
Incentives for Improvement
States have obligations under international law to uphold and protect the right to vote and to ensure that citizens can exercise their fundamental right without interference, intimidation, or coercion. Free and fair elections are protected under the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights. Complementary international and regional obligations, such as the Organization for Security and Cooperation in Europe (OSCE) commitments, further mandate Central Asian states to respect and protect freedom of expression, association, and assembly online, and the right to freely seek and impart information (Human Rights Council, 2022; Democracy Reporting International, 2012). In principle, governments should secure their legislative frameworks and enforcement mechanisms against abuse and exploitation of the online information space for political ends, guarantee the independence of digital infrastructure and services, and refrain from repressive practices such as internet shutdowns or throttling, censoring online content, or surveillance that violate the principles of necessity and proportionality and the right to seek, receive, and impart information. However, in practice, Central Asian countries justify restrictive practices as an intrinsic prerogative derived from sovereign state authority, appealing to the need to safeguard national security and maintain constitutional and public order (Muhamedov & Buralkiyeva, Reference Muhamedov and Buralkiyeva2023).
While the regimes lack incentives to ease their digital authoritarian practices, the economic ramifications of interference with internet connectivity and service restrictions are significant and could potentially influence countries’ behavior (Lamensch, Reference Lamensch2021; Rakhmetov & Valeriano, Reference Rakhmetov and Valeriano2022). Internet shutdowns and near-complete blackouts disrupt access to financial and public services and, in their consequences, harm populations as well as the private and public sectors (Muhamedov & Buralkiyeva, Reference Muhamedov and Buralkiyeva2023). Network interference inflicts reputational damage, discourages foreign investments, erodes citizens’ trust in digital services, and hinders digital transition (Woodhams & Migliano, Reference Woodhams and Migliano2023). Some states have shown an interest in minimizing the risks associated with shutting down the internet. However, this comes alongside exerting control over the online information space through more sophisticated approaches that allow monitoring of the online space and circumscribing access to information. The Kazakhstani government has acknowledged that the nationwide internet shutdowns during the January 2022 unrest were an excessively harsh response; however, the authorities did not denounce the practice of internet shutdowns. The Ministry of Digital Development announced a more refined approach, proposing the creation of a list of resources that would remain accessible during such crises (Zakon.kz, 2023). These were expected to be pro-government media. Turkmenistan has gone further: in December 2022, the Ministry of Foreign Affairs announced plans to develop an autonomous national digital network that would disconnect the country from the global internet (Chronicle of Turkmenistan, 2022). The project aims to create a separate digital infrastructure for tighter censorship, potentially resembling China’s Great Firewall.
Central Asian governments collect extensive data on their citizens without implementing adequate data security measures. This raises significant cybersecurity and human rights concerns. The countries have laws intended to protect personal and biometric data and are working to improve data protection standards. However, substantial loopholes allow government access to sensitive information under the guise of national security or public order. By compromising data integrity, as well as users’ anonymity and privacy, Central Asian countries increase their potential attack surface and risk exposing domestic digital ecosystems to foreign interference. Both Russia and China are active powers in the region with a track record of hacking elections for political influence. These threats only intensify with the growing geopolitical polarization.
Central Asian countries largely fail to increase cyber resilience and preparedness. The Global Cybersecurity Index (GCI) by the International Telecommunication Union (ITU) measures the commitment of 182 countries to cybersecurity at a global level. In the 2024 Index, Kazakhstan and Uzbekistan are ranked as advancing, while Kyrgyzstan, Tajikistan, and Turkmenistan remain in low tiers. The ITU figures show that Central Asia has been digitalizing without adequate legal frameworks, cybersecurity measures, and the necessary capacity in place. Nation-state actors and organized cybercriminal groups can exploit these vulnerabilities, especially around elections, referendums, and protests, and position themselves to exert political or financial pressure on governments. Election interference benefits the perpetrators, while regional responses remain inadequate to deter malicious behavior and ensure meaningful accountability. With the growing use of cyber operations and disinformation campaigns as a political tool and large-scale participation of non-traditional and non-state actors in a domain that has traditionally seen the exclusive engagement of states, foreign threats to the region’s elections and fragile democratic processes are set to increase. In response to the worsening threat landscape, the Central Asian republics have more incentives to double down on extended cooperation and alignment with international partners. Although optimism for revitalizing democracy remains limited in the short to medium term, certain aspects of cyber policy and cybersecurity offer promising opportunities for multistakeholder collaboration.
Recommendations
Reduce the Risk of Information Compromise
• Central Asian governments must curb the mass collection of personal data to what can reasonably be justified, following the principles of legality, necessity, and proportionality. At the same time, they should strengthen cybersecurity, transparency, and oversight measures for how data is processed, transmitted, and stored, securing the systems that are custodians of sensitive and personal data. Excessive data collection creates weaknesses in the digital ecosystem that can be exploited for malicious purposes, such as unauthorized access to personal data, resulting in severe privacy violations with little accountability.
• Broad surveillance powers violate international human rights standards, undermine data security, and induce self-censorship. States must refrain from intrusive practices such as mass surveillance, real-time collection of traffic data, and targeted interception of online communications without legal authorization and a legitimate purpose, adhering to the principles of necessity and proportionality in line with international human rights standards.
• Internet service providers and online platforms must help secure digital ecosystems, especially with regard to transparency about their operations, trust and safety practices, and potential threats to users’ privacy. Digital service providers must ensure privacy by design, including end-to-end encryption, data protection safeguards, and fact-checking mechanisms that include trusted and verified partners in the loop of accuracy verification and limit the virality of disinformation and other harmful content. Priority should be given to improving verification techniques to combat manipulated content and coordinated inauthentic behavior online, especially amid the commercialization and rapid proliferation of AI tools for language generation, image creation, and content amplification.
• Procurement and deployment of dual-use technologies such as commercial cyber intrusion capabilities by states should be conditional and subject to a human rights and impact assessment that informs and guides such activities and their scope, aligns them with the international human rights law, and subjects them to independent and transparent oversight. Governments should commit to imposing international controls to prevent commercial cyber intrusion tools from being exported to countries with poor human rights records and prevent vendors from bypassing regulations.
Improve the Understanding of the Cyber Threat Landscape
• Countering foreign cyber-enabled election interference requires effective technical and legal attribution to identify threat actors and address malicious behavior. States should invest in building their national capacities and extend international and bilateral cooperation aimed at information exchange and forensic and legal capacity building. Proactive measures such as conducting regular cybersecurity assessments and simulations can help identify vulnerabilities within electoral systems.
• Central Asian governments should advance the implementation of confidence-building measures on structured and transparent exchange of information between states and private companies. International partnerships, such as the International Counter Ransomware Initiative, and operational collaboration between the public and private sectors facilitate gathering timely, comprehensive, and actionable intelligence and allow states and affected entities to protect themselves against attacks. For example, the OSCE confidence-building measures include threat information sharing among participating states, public–private partnerships, critical infrastructure protection, and vulnerability disclosure (OSCE, 2016). Common baselines for sharing intelligence and recognizing the impacts of cyber incidents are key for effective mitigation and response, and the OSCE delivers programs in Central Asia that support the development and implementation of the classification of cyber incidents in terms of scale and seriousness, emphasizing critical infrastructure protection (Cybil Portal, n.d.).
• Donor countries and organizations should prioritize funding for civil society initiatives, research and investigative organizations, and independent media providing transparent data collection, investigation, and analysis. Such efforts help build data-driven understandings of the threat landscape, including the knowledge of cyber threats affecting elections and democratic processes. Information about cyber-enabled election interference is currently reliant on the fragmented work of organizations operating with limited resources. To evidence cyber threats and their impacts on infrastructure, services, and populations, authorities and research organizations should conduct rigorous data collection and pilot data-driven methodologies to enhance cyber preparedness, improve accountability, and inform effective victim redress.
Strengthen Accountability and Legal Frameworks
• Central Asian governments must increase transparency around elections by allowing independent election observation and reporting. Electoral observation missions should advance their reporting on cyber incidents, information campaigns, online intimidation, and other cyber-enabled tactics threatening free and fair elections. By incorporating detailed assessments of cyber threats into their reports, electoral observation missions can provide critical insights into how cyber-enabled tactics undermine electoral integrity.
• States should advance normative and legal frameworks to strengthen international provisions safeguarding electoral infrastructure, namely under the United Nations framework of responsible state behavior in cyberspace. In the consensus report of the United Nations Open-Ended Working Group on information and communications technologies (ICTs) (United Nations General Assembly, 2021), states acknowledged that malicious cyber activities against critical infrastructure and critical information infrastructure, including activities that undermine trust and confidence in political and electoral processes and public institutions or that impact the general availability or integrity of the internet, are a real and growing concern, and that public–private cooperation may be necessary to protect its integrity, functioning, and availability. However, progress on these commitments remains uneven and stakeholder engagement inadequate. States should advance cyber norm implementation and clarify the application of international law, including international human rights law, in cyberspace through targeted capacity building to strengthen accountability measures for state-sponsored interference in elections and democratic processes.
• Governments should develop policy and legal frameworks through inclusive consultative processes. Multistakeholder platforms such as the Oxford Process on International Law in Cyberspace provide expert guidance and support for states. “The Oxford Statement on International Law Protections against Foreign Electoral Interference through Digital Means” (The Oxford Process, 2020) reaffirms that the body of existing international law applies to cyber operations by states, including those that have adverse consequences for the electoral processes of other states. Other multistakeholder initiatives, such as the Paris Call for Trust and Security in Cyberspace (2018), have called on stakeholders to strengthen their capacity to prevent malign interference by foreign actors aimed at undermining electoral processes through the malicious use of cyber capabilities.
• State responses to cyberattacks must be timely and transparent, extending to informing citizens, reporting on detected cyber threats, and following up with accountability measures to deter malicious behavior. Detailing which norms or laws have been violated by a cyber incident can enhance the transparency of attributions and build state capacity to recognize and penalize malicious behavior. For instance, the EU Cyber Diplomacy Toolbox includes a cyber sanctions regime that addresses persons and entities involved in cyberattacks or attempted cyberattacks with a significant effect (EEAS, n.d.).
Increase Cyber Resilience of Electoral Infrastructure and Processes
• Cybersecurity measures must be prioritized from the inception phase of building or upgrading electronic election systems and digitizing election administration. State agencies relevant to election processes must act transparently and ensure election results are verifiable so that they are accepted by the electorate. Such measures hold particular significance for the Central Asian region, where several countries have experimented with e-voting systems. For instance, Kazakhstan initially embraced e-voting technology but later abandoned it over concerns about the reliability of e-voting procedures, while in Kyrgyzstan technical difficulties appeared to have affected the vote count.
• Central Asian authorities should increase and incentivize cyber capacity building, including sharing best practices and promoting dialogue with stakeholders. Many existing initiatives facilitated by states or intergovernmental organizations are open only to government representatives. Capacity building needs to be more inclusive to build trust and leverage the strengths of diverse groups of stakeholders. Similarly, international and regional platforms dealing with cybersecurity and cybercrime must enable and support multistakeholder cooperation.
• Civil society is critical in fostering transparency and accountability during the election process. To support these efforts, funding from international organizations, donor countries, and private companies should be directed toward organizations with a track record of capacity-building initiatives that can raise cybersecurity awareness, provide training programs, and help combat disinformation, such as those promoting independent fact-checking or supporting investigative journalistic initiatives.
• Securing free and fair elections, including by refraining from network interference and information campaigns that influence election results, must be an integral part of the multilateral and bilateral agendas that democratic countries pursue in their engagement with Central Asian governments. Cybersecurity cooperation and capacity building present a viable platform to enhance the region’s overall resilience against internal and foreign cyber threats to elections (European Commission, 2023).
Conclusion
Central Asian governments tightly control their online information space. Elections in the region, albeit to varying degrees, are marked by a combination of internet shutdowns, network restrictions, deployment of inauthentic accounts to amplify pro-government narratives, extensive mass and targeted surveillance, and intimidation. This digital authoritarianism takes place amid intensifying disruptive and destabilizing cyberattacks, data extortion, cyber espionage, and foreign disinformation campaigns. These multiple and connected cyber threats require a coordinated response to enhance election integrity and the resilience of the digital ecosystem. Governments must curb domestic censorship and surveillance practices to enable free and fair elections while mitigating the risk of cyber-enabled foreign interference. State-affiliated actors will continue to target electoral processes to sow distrust and project power as long as the rewards outweigh the risks. The cyber domain presents promising opportunities for international and multistakeholder collaboration to reduce the risk of information compromise, improve understanding of the cyber threat landscape, strengthen accountability and legal frameworks, and increase the cyber resilience of electoral infrastructure and processes – steps that are increasingly urgent in light of growing geopolitical polarization.
The word “democracy” literally means “rule by the people” (Dahl & Shapiro, Reference Dahl and Shapiro2024, para. 1). Democracy is underpinned by the right and power of people in a jurisdiction to shape their collective destiny. This is enshrined in international law. Article 21 of the Universal Declaration of Human Rights (1948) defines the authority of government as stemming from the will of the people who elected it, and affirms the right of the people to participate in their government, as well as their right “of equal access to public service” in their country.Footnote 1 Article 1 of the International Covenant on Civil and Political Rights (1976) similarly enshrines the right of all peoples to self-determination and thus their right to “freely pursue their economic, social and cultural development.”Footnote 2 The United Nations generally advocates for democracy as a system of governance in which the “freely expressed will of people is exercised,” and which enables greater security and human development (United Nations, 2019, para. 4). Therefore, democratic rights comprise both specific human rights, such as freedom of expression and peaceful assembly and association (United Nations, 2019, para. 5), and, more generally, the right of the people of a jurisdiction to determine how they are governed and how they pursue the development of their society and economy.
Economic development – the process by which an economy becomes more advanced, “especially when both economic and social conditions are improved” (“Economic development,” 2023) – and the promotion of human rights, including democratic rights, share a symbiotic relationship. For example, Articles 23 and 25(1) of the Universal Declaration of Human Rights (1948) protect one’s entitlement to what are arguably the fruits of economic development, including the right to an appropriate standard of living, food, health, and certain employment protections (Feldman et al., Reference Feldman, Hadjimichael, Lanahan and Kemeny2016, p. 7). Sen (Reference Sen2001, p. 36, emphasis in original) argues that the conferral of human rights (the “expansion of freedom”) functions “as both … the primary end and … the principal means of development.” In underpinning economic growth, economic development is among “the most reliable means for advancing human rights,” which include democratic rights (Feldman et al., Reference Feldman, Hadjimichael, Lanahan and Kemeny2016, pp. 6–7; Marslev & Sano, Reference Marslev and Sano2016, p. 15). For instance, economic development enables richer participation in society, given that citizens feel more secure expressing their political views when they are confident of their financial security, their food security, and their housing. Therefore, economic development (and the economic growth that follows) can bolster citizens’ trust and confidence in democracy itself as a means of delivering tangible benefits for them – and thus in the capacity of democratic institutions to promote and preserve their human rights through their roles in making and/or adjudicating law and policy concerning economic development.
As enablers of economic development, and thus growth, through the provision of the essential services that citizens need to live and flourish, critical national infrastructure (CNI) assets play a vital role in the promotion of citizens’ human rights, including their democratic rights. Indeed, citizens’ ability to do anything in modern societies depends on the availability of, for instance, the electricity, telecommunications, and water treatment provided by CNI assets (Parliamentary Joint Committee on Intelligence and Security, 2022). Their protection is a “sovereign necessity,” given the relevance of these assets’ functioning to their societies’ national security (Parliamentary Joint Committee on Intelligence and Security, 2022, p. 6). Indeed, the very definition of CNI assets under national laws invokes their key role in the preservation of social or economic stability, national defense, and security (see, e.g., Security of Critical Infrastructure Act 2018 (Cth) s 51(c); Critical Infrastructures Protection Act of 2001 (2001) 42 USC § 5195(e)).
A species of CNI is digital public infrastructure (DPI). Given that this chapter is focused on the Indian approach to DPI, it will define DPI as per the definition agreed upon by the G20 under India’s presidency of the multilateral grouping, that is, a “set of shared digital systems, built and leveraged by both the public and private sectors, based on secure and resilient infrastructure, and can be built on open standards and specifications,” and open-source software (OSS), as well as an enabler of “delivery of services at societal-scale” (G20 Leaders, 2023, para. 55). The systems used to operate DPI (which this chapter refers to as “DPI Backbones”) are CNI assets that deliver digitally native essential services, promote the human rights of users of those services, and thus, if regulated by democratic systems of government, help maintain the citizenry’s confidence in democracy itself and the resilience of their democratic institutions. One should also note, for instance, the G20 countries’ call in 2023 for DPI to be “human-centric, development-oriented” and governed in a way that “respect[s] human rights and fundamental freedoms” (G20 Digital Economy Ministers, 2023, paras. 6, 7, Annexure 1, para. 6.j). These calls reflect the role of DPI as an enabler of the preservation and promotion of the human rights, including democratic rights, of citizens reliant on the services delivered through DPI.
In this vein, any threat to the operational resilience, including the cyber resilience, of CNI assets, such as DPI Backbones, is a threat to the human rights of the population they serve, including their democratic rights (see, e.g., Newbill, Reference Newbill2019, pp. 766, 771). Such a threat also undermines citizens’ confidence and trust in the ability of democratic systems of government to regulate the functioning of CNI assets and thus promote their human rights. The United States acknowledges this in referring to cybersecurity as “essential to … the operation of our critical infrastructure, [and] the strength of our democracy and democratic institutions” (The White House, 2023, p. i). Similarly, the G7 Digital and Tech Ministers (2023, para. 15) defined “secure and resilient digital infrastructure” as “a key foundation for … an open and democratic society.”
One should also note that two of the norms of responsible state conduct in cyberspace, approved by the United Nations General Assembly (UNGA) by consensus, call on states to appropriately protect their CNI assets, which would include DPI Backbones, from cyber-enabled threats and – in ensuring they respect specific UN Human Rights Council resolutions on human rights in the digital age – to “guarantee full respect for human rights, including the right to freedom of expression,” a democratic right (United Nations, 2021, pp. 8, 11, 13). This is complemented by the role of CNI assets themselves, as noted above, in enabling both the promotion of human rights and the implementation of international law and norms, through which elected governments can give their citizens confidence in democracy as a mode of government. Therefore, work by governments to tackle cyber-enabled threats to CNI assets, and thus boost their cyber resilience, would promote and operationalize the UNGA-approved cyber norms – one of the themes of this book, which adopts a wide, holistic approach to exploring how to defend democracy.
The conceptual relationships outlined in the preceding paragraphs are summarized in Figure 11.1.

Figure 11.1 The relationships between critical national infrastructure (CNI; including digital public infrastructure (DPI)); economic development and growth; human rights; citizens’ trust and confidence in democratic institutions and the resilience of those institutions; and United Nations General Assembly (UNGA)-approved norms for responsible state conduct in cyberspace (composed by the author)
This chapter focuses on India’s DPI, the platforms and systems that deliver a range of digitally native essential services to the citizens of the world’s largest democracy (Guterres, Reference Guterres2022, para. 10). As it highlights, Indian DPI has enabled an extraordinary level of economic development and growth. The cyber resilience of DPI Backbones is vital to Indians’ trust in the democratic institutions that define, run, and/or oversee this infrastructure (such as the elected federal government) as bodies meant to promote their welfare and thus their human rights. This feeds directly into the resilience of Indians’ faith in democracy as a concept. Any work by the Indian government to ensure the cyber resilience of DPI Backbones would also be implementing the aforementioned UNGA-approved norms for responsible state conduct in cyberspace.
In exploring Indian DPI, this chapter also argues that the bedrock of India’s DPI – the actual open Application Programming Interfaces (APIs) and protocols called the “India Stack” – makes the latter fundamental to India’s resilience as an increasingly digitally enabled democracy (iSPIRT, 2021). The chapter homes in on how the software dependencies of the India Stack and DPI Backbones invite systemic cyber risks for India as a democracy. In particular, it calls for India to address the systemic cyber risks that its DPI, built with the India Stack, faces via the critical software that runs on India’s DPI Backbones. This is justified with reference to the sheer complexity of (critical) software supply chains, the growing intent and capability of malicious cyber actors to target software supply chains, and the (recent) history of CNI assets more generally being attacked via vulnerabilities in, or misconfigurations of, critical software. There is also a recommendation that India closely scrutinize the cyber risks invited by the use of OSS in building DPI, especially since the India Stack itself uses OSS (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 8). The chapter closes by exploring how India can take the India Stack, as well as its approach to building and deploying DPI more generally, to the world, particularly democracies in the Global South. It calls on India, as part of sharing its open standards-based approach to DPI, to help its democratic Global South partners deploy DPI in a cyber-resilient manner. Doing so would make it harder for malicious actors to disrupt the provision of essential services and thereby undermine citizens’ faith in democratic systems of government as guarantors of economic development and human rights, or to threaten these partners’ efforts to implement the aforementioned UNGA-approved cyber norms, per Figure 11.1.
This chapter defines “critical software” as software, the execution of which, broadly speaking, can seriously impact the cyber resilience of a computer running it, drawing inspiration from the National Institute of Standards and Technology (NIST)’s (2021) definition.
What Are Digital Public Goods and Digital Public Infrastructure?
Before exploring the India Stack, two foundational concepts must be understood: digital public goods (DPGs) and DPI. DPGs are “types of open-source software, models and standards that countries can use to operationalize their [DPI]” (OECD, 2021, p. 257). DPI comprises “platforms such as identification (ID), payment and data exchange systems that help countries deliver vital services to their people” (OECD, 2021, p. 257). In this manner, DPGs are the building blocks of DPI and thus the essential services that the latter provide in societies and economies. DPGs and DPI are ever more crucial in light of the move of social and economic activity online since the pandemic (OECD, 2022, p. 5). DPGs thus democratize the ability to “participate fully in social, financial, and political life,” given that said ability is technologically dependent (Behrends et al., Reference Behrends, Simons, Troy and Gupta2021, p. 3). DPGs are, like the DPI they underpin, fundamental to democratic resilience in the societies where they are provided, given the relationships between DPI, CNI, economic development and growth, human rights, as well as the trust and confidence in democratic institutions and democracy as a concept (per Figure 11.1).
DPGs are defined by their open format, adaptable for different populations and contexts (OECD, 2021, p. 257). The underlying code and standards can be audited by the (government) organizations deploying DPGs in DPI, enabling the identification and management of vulnerabilities in the code and other potential shortcomings of the relevant DPGs before they are deployed widely across a population (OECD, 2021, p. 260). This transparency enables more effective and inclusive consultation of relevant stakeholders by (government) organizations deploying DPI, helping drive a better, more inclusive deployment that respects democratic values and helps inspire confidence in the ability of democratic systems of government to deliver economic development via DPGs and DPI.
Indeed, the role of DPGs as enablers of the flourishing of societies and democracies is well recognized. The United Nations Secretary-General (2020, pp. 6–7) pointed to their critical role “in unlocking the full potential of digital technologies and data to attain the Sustainable Development Goals, in particular for low- and middle-income countries.” In addition to the technical infrastructure on which DPGs are deployed, their deployment requires common standards that enable open access to datasets that themselves become available as DPGs, as well as “robust human rights and governance frameworks to enhance trust in technology and data use, while ensuring inclusion” (United Nations Secretary-General, 2020, p. 7). These factor into efforts to ensure that DPI itself is developed and deployed in a manner that respects human rights, per the calls of the G20 (G20 Digital Economy Ministers, 2023, paras. 6, 7, Annexure 1, para. 6.j). Therefore, the safely calibrated deployment of DPGs as part of DPI is vital to trust and confidence in the democratic institutions running and/or overseeing that deployment, and to the resilience of the democracies where that deployment occurs.
The nature of DPGs can also be understood by contrasting their reflection of an open approach to the digital delivery of essential services to citizens with the closed approach of private companies. Historically, private companies have often provided these digitally delivered services because they have the capacity to build and deploy the necessary infrastructure at (population) scale and in an economical manner (Burt, Reference Burt2018, as cited in OECD, 2021, p. 257; ID4D, 2020, as cited in OECD, 2021, p. 257). Since they own the intellectual property underlying these services and infrastructure – and set the terms and conditions of their use – these companies have substantial influence, if not control, over whether and how these services are delivered, including over their benefits to citizens (Burt, Reference Burt2018, as cited in OECD, 2021, p. 257; ID4D, 2020, as cited in OECD, 2021, p. 257). These companies are powerful gatekeepers and, given the centrality of these services to the ability of citizens to meaningfully participate in their societies and economies, have great influence over the conduct of affairs in a democracy (Behrends et al., Reference Behrends, Simons, Troy and Gupta2021, p. 5). In this manner, the digital delivery of essential services by private companies threatens the digital sovereignty of democracies, that is, their governments’ “power and authority … to make free decisions affecting citizens and businesses within the digital domain” (Gawen et al., Reference Gawen, Hirschfeld, Kenny, Stewart and Middleton2021, as cited in OECD, 2021, p. 257). These companies threaten the ability of democratically elected governments to act in the interests of the people who elect them, the very essence of democracy, including by working toward their economic development by achieving the Sustainable Development Goals (OECD, 2021, p. 257).
Conversely, this underlines the criticality of DPGs as pillars of the collective ability of the people of a democracy, via the governments that they elect, to act in their interests; and thus, their trust in democracy as a concept.
As with DPGs, DPI operates under an open paradigm governed by frameworks built on transparency and participatory governance (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 8). DPI comprises the “networks, systems or platforms that allow programmatic and secure access to the underlying data and business logic [of DPGs] through [APIs]” (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 8). DPI is “public” if its operator provides equal access to the relevant services to all users, and if its underlying standards and platforms are publicly available, making the subsequent enjoyment of these services by users non-rivalrous (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 8). This is combined with the positive externalities for populations where the DPI is deployed in an open, interoperable manner, which also points to the nature of DPI as providing public goods (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, pp. 8–9). It is the interoperability of DPI (such as payments services with identity verification services) that enables these externalities to be “foundational and cross-cutting” at the public policy and operational levels (Desai et al., Reference Desai, Markskell, Marin and Varghese2023, paras. 6–10). Indeed, DPI reaches across sectoral and policy siloes “to create population scale digital ecosystems that promote inclusive development,” including the acceleration of progress toward achieving the 2030 Agenda for Sustainable Development and Sustainable Development Goals (European Union & Government of India, 2023, p. 2; see also G20 Digital Economy Ministers, 2023, para. 7). 
Therefore, just like the DPGs that underpin DPI, it is the “foundational infrastructure” for a democracy and goes to the ability of its people to choose their own destiny (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 8). DPI enables the efficient, convenient, and transparent delivery of a range of essential services and provides all stakeholders with a platform to innovate on top of the DPI itself, facilitating further economic development (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 12). In this vein, the cyber and operational resilience of DPI Backbones is critical to the resilience of the democracy where DPI is deployed. Much like how protecting CNI assets has been termed a “sovereign necessity” in light of the criticality of the essential services these assets provide to the national security and social stability of the jurisdiction where they are deployed (Parliamentary Joint Committee on Intelligence and Security, 2022, p. 6), so is protecting DPI Backbones from a range of threats, especially in the cyber domain, vital to the resilience of the democracy where DPI is deployed.
In the same vein, India’s DPI, like the India Stack that India’s DPI is built with, is the “foundational infrastructure” for Indian democracy, given that the operational resilience of its DPI Backbones enables Indians to participate in their society and economy in the manner in which they collectively see fit (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 8). India’s DPI and DPI Backbones go to India’s digital sovereignty, the ability of the government elected by the Indian people to make decisions affecting their participation in the burgeoning Indian digital economy, rather than that of (overseas) malicious cyber actors looking to undermine their confidence in their democratic system of government (Gajbhiye et al., Reference Gajbhiye, Arora, Arham, Yangdol and Thakur2022; Gawen et al., Reference Gawen, Hirschfeld, Kenny, Stewart and Middleton2021, as cited in OECD, 2021, p. 257). As per Figure 11.1, if the Indian government takes steps to ensure the cyber resilience of Indian DPI Backbones (CNI assets), it implements the UNGA-approved cyber norms, promotes economic development and growth, Indians’ human rights, including their democratic rights, and thus their trust that democracy truly delivers for them.
What Are the India Stack and India’s DPI?
The India Stack can be defined as follows:
India Stack is the moniker for a set of open APIs [Application Programming Interfaces] and digital public goods that aim to unlock the economic primitives of identity, data, and payments at population scale … [T]his project was conceptualized and first implemented in India, where its rapid adoption by billions of individuals and businesses has helped promote financial and social inclusion and positioned the country for the Internet Age.
Therefore, the India Stack comprises the DPGs that enable the provision of digitally native essential services by India’s DPI, the platforms that are built with the India Stack. The India Stack has three layers (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, pp. 3, 10, 20, 22; D’Silva et al., Reference D’Silva, Filková, Packer and Tiwari2019, p. 8):
identity, which includes Aadhaar (a “12-digit unique identification number that is linked to biometric [identifiers]”), eKYC (Electronic Know Your Customer, that is, identity authentication using Aadhaar), eSign (legally binding digital signatures that are linked with the signatories’ respective Aadhaar numbers), GSTN (Goods and Services Tax Network, that is, 15-digit identifiers for businesses registered under the federal goods and services tax regime) and Udyam (a framework for Indian Micro, Small, and Medium Enterprises);
payments, which include the Unified Payments Interface (‘UPI’, fast payments rails), Aadhaar-Enabled Payment System (enabling transactions between bank accounts authenticated by the parties’ Aadhaar identities), Aadhaar Payment Bridge (digitally transferring government benefits and subsidies to the Aadhaar-linked bank accounts of payees) and Bharat Bill Payment System (a platform through which citizens can pay utility, telephone, and other types of bills); and
data, which includes DigiLocker (a credential and document management platform for Aadhaar holders) and Account Aggregator (a consent-based data portability framework for Indians’ financial information that is handled by federally regulated financial institutions).
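The dependency structure of these three layers can be illustrated with a deliberately simplified sketch. All names, identifiers, balances, and interfaces below are hypothetical and for exposition only; the actual India Stack exposes these capabilities as open APIs operated at population scale, not as local Python functions:

```python
# Hypothetical sketch of how India Stack-style layers compose:
# the payments layer authenticates the payer via the identity layer,
# and the data layer releases records only with explicit consent.
# All identifiers and figures below are invented for illustration.

ID_REGISTRY = {"1234-5678-9012": {"name": "A. Kumar", "account": "acct-001"}}
BALANCES = {"acct-001": 5000, "acct-002": 1000}

def ekyc_verify(id_number: str) -> bool:
    """Identity layer: authenticate a user against the ID registry."""
    return id_number in ID_REGISTRY

def transfer(payer_id: str, payee_account: str, amount: int) -> int:
    """Payments layer: a transfer proceeds only after identity verification."""
    if not ekyc_verify(payer_id):
        raise PermissionError("identity not verified")
    source = ID_REGISTRY[payer_id]["account"]
    if BALANCES[source] < amount:
        raise ValueError("insufficient funds")
    BALANCES[source] -= amount
    BALANCES[payee_account] += amount
    return BALANCES[source]  # payer's remaining balance

def share_record(id_number: str, consent: bool):
    """Data layer: personal records are shared only with explicit consent."""
    return ID_REGISTRY[id_number] if consent else None

remaining = transfer("1234-5678-9012", "acct-002", 1500)
```

The point of the sketch is the dependency structure: payments and data-sharing services sit atop a common identity layer, which is why the resilience of that shared layer matters for every service built above it.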
This is an extensive catalog of services that are delivered by Indian public sector entities via India’s DPI, enabled by the India Stack. The scope of these services speaks to the importance of the India Stack and thus India’s DPI to the ability of hundreds of millions of Indians to participate fully in social and economic activities. Some key examples of these services and platforms are Aadhaar (digital identity), the UPI (digital payments), the Account Aggregator Framework (open banking), and DigiLocker (management of government-issued documents and credentials) (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, pp. 10, 20, 22).
One should also note the additional delivery mechanisms for services that India’s DPI comprises, including the following:
Open Network for Digital Commerce (ONDC), a set of open protocols which seeks to democratize e-commerce and replace a “platform-centric” model with a “transaction-centric” one where buyers and sellers can find each other across e-commerce applications that are hosted on the ONDC network and built using those protocols (Gupta, Reference Gupta2022, paras. 28–29; Open Network for Digital Commerce, 2022, pp. 10, 13);
Direct Benefit Transfer (DBT) mechanism for transferring funds and other benefits under 312 Indian social welfare programs run by fifty-three government departments directly to the bank accounts of beneficiaries (over half of which are linked with their Aadhaar numbers), and which is part of India’s Public Financial Management System (PFMS), itself integrated with over 600 banks for verification of bank accounts and with the National Payments Corporation of India (NPCI) for verification of Aadhaar-linked bank accounts (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 8; Government of India, 2023a; Ministry of Finance, 2023, p. 212). The DBT is fundamentally enabled by the “JAM Trinity,” that is: the opening of over 530 million bank accounts (as of late August 2024) for formerly unbanked Indians under the “Pradhan Mantri Jan Dhan Yojana” program; Aadhaar, the means by which the identity of most DBT beneficiaries is verified; and the sheer penetration of mobile telephony in India, namely, over 960 million phones that enable phone and/or online banking (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, pp. 15–16; Department of Financial Services, 2023; Ministry of Finance, 2023, pp. 30, 156; MyGovIndia, 2023; Modi, 2024);
COVID Vaccine Intelligence Network (CoWIN), which was developed as an extension of India’s electronic Vaccine Intelligence Network and is a cloud-based solution that ran India’s COVID-19 vaccination program (Ministry of Finance, 2023, p. 198). This served all stakeholders, be it vaccine recipients (booking appointments and receiving vaccination certificates), vaccinators, or logistics providers (such as through real-time visibility into the vaccine supply chain at the national, state, and even district levels) (Ministry of Finance, 2023, p. 198).
To understand the sheer scale of India’s DPI, the following statistics and developments are instructive:
India’s success with the JAM Trinity in advancing financial inclusion is such that, without DPI solutions, India was estimated to need forty-seven years to open bank accounts for over 80 percent of Indian adults; instead, it took only nine years to lift that share to that level from 25 percent in 2008 (World Bank Group, 2023, p. 3, citing D’Silva et al., Reference D’Silva, Filková, Packer and Tiwari2019, p. 4).
In the case of the UPI:
○ In 2022, India processed the most real-time digital payments in the world, the number growing over 76 percent year-on-year in 2021–2022 and the country representing over 46 percent of all the world’s real-time digital payments in 2022 (ACI Worldwide, 2023, pp. 5–6). This has been complemented by the shrinking of the informal economy in India, estimated in 2021 to represent 15–20 percent of India’s gross domestic product (GDP), down from 52 percent in the fiscal year 2018 (Ghosh, Reference Ghosh2021, pp. 1–2).
○ The number of UPI transactions grew in fiscal year 2023–2024 by a record 57 percent to a record 131 billion (Jacob, Reference Jacob2024, paras. 1–3).
○ As a signal of the value of the UPI as a digital payments paradigm, Google even presented the UPI to the U.S. Treasury Department as an example of how to deploy an open standards-based payments network (Wadhwa, Amla, & Salkever, Reference Wadhwa, Amla and Salkever2022, paras. 2, 9).
As of November 2022, there had been over 1.35 billion unique Aadhaar numbers generated; 8.6 billion identity authentications done with Aadhaar; over 1.3 billion eKYC checks done; 754 million bank accounts linked with Aadhaar; and over 1.549 billion transactions done through payment systems simply reliant on counterparties’ Aadhaar numbers to identify them (Ministry of Finance, 2023, pp. 155–156).
Between 2013 and January 2023, the DBT was used to transfer over 3.2 trillion dollars’ worth of social security benefits directly to the bank accounts of hundreds of millions of Indians (Ministry of Finance, 2023, pp. 212–213). This was alongside the PFMS transferring COVID relief payments to around 500 million Indians during the pandemic (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 18). Separately, a major benefit of the DBT, enabled by welfare recipients’ bank accounts being linked to their Aadhaar numbers, is the substantial reduction in monies being paid to phantom beneficiaries. The Indian government has estimated that it saved over Rs 2.73 trillion between 2015 and 2022 by plugging these leakages (Choudhary & Singh, Reference Choudhary and Singh2023, para. 3).
As at December 31, 2022, twenty-three Indian banks had joined the Account Aggregator framework, enabling Indians, holding 1.1 billion bank accounts collectively, to share their banking data with eligible financial institutions through this open banking framework. Already, 3.3 million Indians have linked their accounts to the framework and 3.28 million of them have shared their data under it (Ministry of Finance, 2023, pp. 308–309). In June 2023, citizens had cumulatively raised 13.46 million consents (representing a monthly growth of over a quarter) while 2.9 million new consents were fulfilled in the same month (World Bank Group, 2023, p. 12).
The ONDC platform, which went live in September 2022, has expanded to pilot programs in over 180 cities and saw over 5,000 orders daily in the retail category of goods and services (mostly food and beverages) as at April 2023 (Choudhury, Reference Choudhury2023, para. 5). This constituted a twenty-five-fold growth in the period March–April 2023 (Choudhury, Reference Choudhury2023, para. 1).
CoWIN was the backbone of India’s COVID-19 vaccination program, the world’s largest, and critical to the distribution, administration, and tracking of over 2.2 billion doses of vaccines given to 1.04 billion Indians (847 million of whom were identified by their Aadhaar numbers) between January 2021 and September 2022 (Ministry of Finance, 2023, p. 198; World Health Organization, 2021).
These factors suggest the criticality of DPI, built with the India Stack, to the ability of the citizens of India, the world’s largest democracy, to participate fully in their society and economy (Guterres, Reference Guterres2022, para. 10). This is reinforced by the sheer quantum of citizens served by India’s DPI and the vitality of the services it provides – from digital identity to payments, data portability, social welfare, and vaccination – to their collective ability to live their lives as they see fit. Given that several of these services are operated by the public sector, India’s DPI is vital to the ability of Indian citizens to shape their collective destiny via the government they elect – the very essence of democracy – and have faith in that system of government as a means of driving their economic development, growth, and promotion of their human rights, including their democratic rights, as flagged in the opening paragraphs of this chapter and in Figure 11.1.
The importance of India’s DPI and the India Stack to India’s ability to drive the economic development of its people, and thus their confidence in its democratic institutions as enablers of this development (given in Figure 11.1), is also underlined by the praise these have received from senior officials in governments and international organizations. The President of the UNGA referred to India as “leading in the field of digital public infrastructure” with the world having “much to learn from [India]” (Ministry of External Affairs, 2022a). The Secretary-General of the United Nations labeled India’s DPI as “world-class” and implementing a “whole-of-society approach to development [which] combines old-fashioned community outreach with cutting-edge technology” (Observer Research Foundation, 2022). The German Ambassador to New Delhi referred to India’s journey as a growing digital economy, including through the digital delivery of services, as an example for Germany to learn from (ANI, 2022c). The United States’ then-Deputy National Security Advisor for Cyber and Emerging Technology praised Aadhaar as an enabler of “critical services” for Indian citizens – “often to sets of a population who were illiterate so that they could get their rations, they could get their access” – and for its “thoughtful” approach to preserving the privacy of citizens through data minimization (Inglis & Neuberger, Reference Inglis and Neuberger2021, pp. 16–17). The Deputy Director of the International Monetary Fund’s Fiscal Affairs Department termed the DBT a “logistical marvel, seeking to help people at low‑income levels, reaching hundreds of millions of people” (ANI, 2022b).
Industry and civil society actors have also praised India’s DPI, enabled by the India Stack, as an example for the world. The Chief Technology Officer and a Senior Vice-President of PayPal at the time, Sri Shivananda, spoke of the uniqueness of India’s achievement:
There is no technical stack in the world with a country’s name as a prefix. What the Indian government and regulators have done together with the common national identity through a digital system and a common national API for payments, is nothing but brilliant. India is one of the first countries that has a platform first approach, with the platform being secure, robust and reliable. It is a role model for many other countries to follow.
Philanthropist and technology luminary, Bill Gates, was similarly effusive in his remarks:
No country has built [a] more comprehensive platform than India … Because of the pioneering investment including creating the basic Aadhaar identity, India was in the lead in getting out (relief) payments … during the pandemic … We would like to see all countries, particularly developing countries, adopt these things.
These words of praise for India’s DPI, built atop the India Stack, underline the criticality of both to the ability of India’s people to independently run their affairs through the government they elected to further their digital sovereignty. They also reinforce the nature of India’s DPI as a guarantor of their economic development and thus, ultimately, of their trust and confidence in the democratic system of government to advance their interests through tools such as DPI and the India Stack, given in Figure 11.1. The substantial track record of DPI, and the praise it has drawn, therefore underscores its importance as a pillar of the resilience of India’s increasingly digital democracy (explored earlier).
This makes it all the more important for India to prosecute the systemic cyber risks that its DPI, especially its DPI Backbones, faces via critical software, which the chapter now explores.
Prosecuting Systemic Cyber Risks from Critical Software
Given that India’s DPI, like the India Stack itself, is digitally native, the security of the software that is deployed on DPI Backbones is paramount for its operational resilience. The weaponization of vulnerabilities in that software by malicious cyber actors – be they criminals, hacktivists, or state(-affiliated) actors – ultimately weaponizes the “digital dependence” of over a billion Indians on these CNI assets to verify their identity, as well as make and receive digital payments, among other things (OECD Council, 2022, Preamble para. 6). These are systemic cyber risks for India as the world’s largest democracy (Guterres, Reference Guterres2022, para. 10).
Governments generally recognize the need to prosecute such risks. Under India’s presidency, the G20 Leaders (2023, para. 57) recognized the increasing importance of a “secure digital economy” for all stakeholders. The G20 Digital Economy Ministers (2023) repeatedly used the word “secure” to describe the digital economy and technologies, including DPI, that their governments seek to foster. The Ministers “reaffirm[ed] the importance of security in the digital economy as a key enabling factor” (G20 Digital Economy Ministers, 2023, para. 5). Given the vitality of secure digital technologies to how citizens of democracies conduct their lives, as flagged above, software security goes directly to how they collectively shape their destinies and, flowing from that, their confidence in democratic institutions as protectors of their democratic rights, such as the freedoms of expression, assembly, association, and privacy, in the increasingly digital societies they inhabit.
Given that most of the services provided by India’s DPI are provided by the public sector (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 10; D’Silva et al., Reference D’Silva, Filková, Packer and Tiwari2019, p. 8), the weaponization of vulnerabilities affecting the underlying DPI Backbones has the very real potential to undermine Indians’ confidence in their democratically elected government to protect their collective ability to shape their own destiny, and thus their confidence in democracy itself. This echoes the implications of the cyber-enabled disruption of the functioning of physical CNI assets for social and economic stability that were flagged in the earlier sections of this chapter, and thus the relationships visualized by Figure 11.1 between CNI (including DPI) cyber resilience, economic development, growth, as well as the trust and confidence of citizens generally in democratic institutions, and the implementation of the UNGA-approved cyber norms.
India’s G20 Sherpa, Amitabh Kant, pointed to how “Cybersecurity is paramount and will be the key to the future,” including in managing “the whole DPI framework” (Digital India, 2023, para. 7). In particular, India needs to be mindful of the systemic cyber risks to its DPI and the India Stack via critical software running on its DPI Backbones. This is because critical software performs functions critical to the cyber resilience of the computer on which it runs (National Institute of Standards and Technology, 2021, p. 10). NIST (2021, pp. 1–3) defines critical software:
as any software that has, or has direct software dependencies upon, one or more components with at least one of these attributes:
is designed to run with elevated privilege or manage privileges;
has direct or privileged access to networking or computing resources;
is designed to control access to data or operational technology;
performs a function critical to trust [that is, the cyber resilience of the device running the software]; or,
operates outside of normal trust boundaries with privileged access.
NIST specifies categories of such software, including access management, device security, operating systems, web browsers, remote access and scanning, and operational and network monitoring (National Institute of Standards and Technology, 2021, pp. 3–8). Given these factors, critical software represents an attractive attack vector for malicious cyber actors to leverage in targeting DPI Backbones.
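NIST’s definition operates as an any-of-five attribute test over a software inventory. A minimal sketch of that logic, using hypothetical attribute field names and inventory entries (neither drawn from NIST’s own tooling), might look like this:

```python
# Illustrative sketch only: the boolean fields paraphrase the five NIST
# critical-software attributes; the inventory records are hypothetical.
from dataclasses import dataclass

@dataclass
class SoftwareRecord:
    name: str
    elevated_privilege: bool        # runs with or manages elevated privileges
    privileged_network_access: bool  # direct/privileged access to network or compute
    controls_access: bool            # controls access to data or OT
    critical_trust_function: bool    # performs a function critical to trust
    outside_trust_boundary: bool     # operates outside normal trust boundaries

def is_critical(sw: SoftwareRecord) -> bool:
    """Per the definition, one attribute holding is sufficient."""
    return any([
        sw.elevated_privilege,
        sw.privileged_network_access,
        sw.controls_access,
        sw.critical_trust_function,
        sw.outside_trust_boundary,
    ])

inventory = [
    SoftwareRecord("web-browser", False, True, False, False, False),
    SoftwareRecord("report-formatter", False, False, False, False, False),
]
critical = [sw.name for sw in inventory if is_critical(sw)]
print(critical)  # only the browser qualifies
```

The point of the sketch is that the definition is deliberately broad: a single attribute, such as privileged network access, is enough to pull a component into scope, which is why operating systems, browsers, and monitoring tools all land in NIST’s categories.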
In this context, India must take steps to appropriately incentivize the vendors of critical software running on DPI Backbones to invest in the security of their software development life cycles (SDLCs), that is, the “formal or informal methodology for designing, creating, and maintaining software” (Souppaya, Scarfone, & Dodson, 2022, p. 1).
Each SDLC can be viewed as being vulnerable to eight categories of threat vector that malicious cyber actors can exploit in order to, for instance, insert malware and thus compromise the (in this case, critical) software produced by that SDLC (SLSA, 2023). These threat vectors in each SDLC are multiplied across each developer and vendor populating the software supply chain(s) for that critical software product. In general, a software supply chain comprises the “entire sequence of events that impacts software from the point of origin where it is designed and developed, to the point of end-use,” and each event can be a means of ultimately targeting the end-user of a piece of software (U.S. Department of Commerce & U.S. Department of Homeland Security, 2022, p. 34). If successful, that targeting is defined as a software supply chain attack, that is, the “compromise of software code … at any phase of the supply chain to infect an unsuspecting [end-user]” (Clancy et al., Reference Clancy, Ferraro, Martin, Pennington, Sledjeski and Wiener2021, p. 1; NTIA Multistakeholder Process on Software Component Transparency Framing Working Group, 2021, p. 24).
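One of the basic controls frameworks such as SLSA build on is artifact integrity: a consumer checks that what it received matches what the producer attests to having built, so that tampering at any intermediate event in the chain is detectable. A minimal sketch, with a hypothetical artifact and digest, is:

```python
# Minimal sketch of one supply-chain control: verifying a received
# artifact against the digest its producer published. The artifact
# bytes here are hypothetical stand-ins for a release binary.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    # Any modification anywhere along the chain changes the digest,
    # so a mismatch signals possible tampering in transit or in build.
    return sha256_of(data) == expected_digest

artifact = b"release-1.4.2 binary contents"
published = sha256_of(artifact)  # digest the producer publishes out-of-band

assert verify_artifact(artifact, published)
assert not verify_artifact(artifact + b"injected backdoor", published)
```

A digest check alone only detects modification after the digest was computed; frameworks like SLSA extend the idea with signed provenance about how and where the artifact was built, addressing compromise of the build process itself.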
Given that critical software can be weaponized to directly compromise the cyber resilience of the computer running it, it becomes all the more crucial for India to take steps to incentivize the vendors of critical software deployed on DPI Backbones – not just to ensure the security of their own SDLCs but also to manage risks from their software supply chains. If they do not do so, they open up India’s DPI Backbones to software supply chain attacks and thus make the delivery of a range of essential services to hundreds of millions of Indians by Indian DPI ever more prone to disruption.
The imperative for India to drive critical software vendors to perform software supply chain risk management (SCRM) in particular is sharpened by the increasingly hostile threat environment for software supply chains. Herr et al. (Reference Herr, Lee, Loomis and Scott2020, p. 8), for example, have compiled a list of eighty-two software supply chain attacks, involving different elements of software supply chains, from 2010 to 2020. Similarly, Australian agencies have warned that threat actors are “increasingly looking to compromise multiple victims across a range of sectors via a single entry point” and targeting “products commonly found in ICT supply chains” (Australian Cyber Security Centre, 2022, p. 65; Australian Signals Directorate, 2023, p. 53). The Cybersecurity and Infrastructure Security Agency (2021b, p. 5) similarly pointed to the threat to software supply chains from highly sophisticated threat actors. The Agence Nationale de la Sécurité des Systèmes d’Information (2023, p. 30) referred to the targeting of software supply chains as “[continuing] to pose a systemic risk.” The Quad, a diplomatic grouping of four democracies (Japan, Australia, India, and the United States), signed joint principles for software security in May 2023 that began with an explicit recognition of “the security risks posed by lack of adequate controls to prevent tampering with the software supply chain by adversarial and non-adversarial threats” (Commonwealth of Australia et al., 2023, p. 2). Indeed, the severity of the threat landscape for software supply chains is underlined by, apart from the relative recency of these warnings from governments, the frankness of tone in these quotes (Nayyar, Reference Nayyar2023, p. 19).
The impetus for India to act with respect to combating critical software risk to its DPI Backbones, including the risk via the software supply chains of critical software vendors, is bolstered by the past few decades of instances where CNI assets around the world have been successfully targeted via critical software running on their systems. The United States and Israel exploited vulnerabilities in software controlling Iranian uranium centrifuges (in the Iranian defense industrial base) to sabotage them (Perlroth, Reference Perlroth2021, pp. 122–130). The Russian state manipulated supervisory control and data acquisition (SCADA) software used by three Ukrainian electricity utilities to open circuit breakers remotely, taking dozens of substations offline and causing blackouts that affected hundreds of thousands of citizens in 2015 (Cybersecurity & Infrastructure Security Agency, 2021a, paras. 2, 8–9; Zetter, Reference Zetter2016, pp. 2–3, 8, 19). Additionally, the Russian state gained access to the KA-SAT satellite internet network via, in part, a misconfigured virtual private network appliance to disrupt it at the start of the war in Ukraine (Blinken, Reference Blinken2022, para. 2; Viasat, Inc., 2022, para. 10). The NotPetya and WannaCry attacks both exploited vulnerabilities in the Windows operating system to cripple the Ukrainian government’s ability to function and the country’s CNI, and European and British CNI assets, respectively (Greenberg, Reference Greenberg2018, paras. 28, 34; Reference Greenberg2019, p. 1; Perlroth, Reference Perlroth2021, pp. 389, 402; Romine, Reference Romine2017, p. 3; Smart, Reference Smart2018, pp. 8, 10; Trautman and Ormerod, Reference Trautman and Ormerod2019, p. 528). Such attacks are possible with respect to India’s DPI Backbones, part of India’s CNI, given that they are entirely dependent on software to function. 
These attacks would also occur within the context of India being the second-most targeted of all countries by malicious cyber actors in 2021 and 2022, with attacks reported to have increased by 24.3 percent in 2022 (CloudSEK, 2023, pp. 8–9).
India has made some progress on the world stage in tackling software security risks, which includes tackling critical software security risks. As G20 President, it forged consensus among the world’s largest economies on the need to uplift software security. The G20 High-Level Principles to Support Businesses in Building Safety, Security, Resilience, and Trust in the Digital Economy (G20 High-Level Principles), endorsed by the G20 Leaders, include principles regarding “Security and Trust” (G20 Digital Economy Ministers, 2023, Annexure 2, para. 1; G20 Leaders, 2023, para. 57.i). As part of “develop[ing] a human-centric culture of security and trust in the digital economy,” these principles seek to promote a “security by design” approach in relation to digital technologies (G20 Digital Economy Ministers, 2023, Annexure 2, para. 1.c). Specifically for DPI, developed under India’s presidency and endorsed by the G20 Leaders, the G20 Framework for Systems of Digital Public Infrastructure (G20 DPI Framework) calls for stakeholders to incorporate “security features within the core design” (G20 Digital Economy Ministers, 2023, Annexure 1, para. 6.e; G20 Leaders, 2023, para. 56.i). Though these are political commitments with respect to building security into the digital economy and DPI, they are welcome. Fulfilling them would require the G20 countries – including the world’s largest software markets, and developer and innovation centers (see, e.g., U.S. Department of Commerce & U.S. Department of Homeland Security, 2022, p. 35) – to incentivize software developers and vendors in their jurisdictions to harden their SDLCs and perform robust software SCRM, having positive externalities for cyber resilience generally, not just that of India’s DPI Backbones. 
Given the relationships between DPI, human rights promotion, trust, and confidence in democratic institutions described by Figure 11.1, these positive externalities ultimately reinforce the resilience of democratic institutions and drive the implementation of UNGA-approved cyber norms.
Alongside work to incentivize the vendors of critical software running on DPI Backbones to invest in the security of their SDLCs, including to perform robust software SCRM, India must closely scrutinize cyber risk invited by the use of OSS. In general, this is because of the sheer pervasiveness of OSS in modern computing (European Commission, 2020, p. 2). It has been estimated that OSS comprises dependencies for 97 percent of all software (Scott et al., Reference Scott, Brackett, Herr and Hamin2023, p. 2). The India Stack itself uses OSS (Alonso et al., Reference Alonso, Bhojwani, Hanedar, Prihardini, Uña and Zhabska2023, p. 8). If one looks at critical software, per a 2020 assessment by the U.S. Department of Commerce of 389 American technology companies that produce “security-related products,” OSS was “frequently” incorporated in software running tools such as firewalls, logging programs, network management devices and (62 percent of) SCADA systems (National Institute of Standards and Technology, 2021, pp. 5–7; U.S. Department of Commerce & U.S. Department of Homeland Security, 2022, pp. 36–37). Furthermore, attack surfaces are multiplied by the sheer number of dependencies of commercial software’s OSS dependencies themselves, meaning that vendors using OSS in their products are inheriting risks transmitted through multiple tiers of OSS developers and maintainers, (potentially) resulting in those OSS dependencies being “deeply buried” within their products (Scott et al., Reference Scott, Brackett, Herr and Hamin2023, pp. 10–11). The sheer size of the resultant attack surface for critical software vendors that incorporate OSS into their products, therefore, should give urgency to scrutiny by India of OSS-borne risk to critical software and thus DPI Backbones.
Indeed, that attack surface is evident in the huge number of vulnerabilities in the OSS ecosystem and, therefore, in the dependencies of commercial software. Synopsys (2023, p. 7) found that 84 percent of 1,730 commercial codebases that it analyzed “contained at least one known open-source vulnerability” in 2022, a 4 percent increase on the equivalent figure yielded from analysis of over 2,409 codebases in 2021. Malicious cyber actors have seized on this, with Sonatype (2023, p. 4) finding that software supply chain attacks increased 633 percent year-on-year in 2022, marking a 742 percent average annual increase over the three years to 2023. An egregious example of malicious cyber actors seeking to exploit vulnerabilities in OSS is the exploitation of a critical vulnerability in the Log4j package, which was first seen in December 2021 (Silvers et al., Reference Silvers, Adkins, Alperovitch, Carlin, DeRusha and Inglis2022, p. 3). Merely five days after the vulnerability was disclosed, threat actors were observed trying to exploit it 400 times per second (Prince, Reference Prince2021, as cited in Silvers et al., Reference Silvers, Adkins, Alperovitch, Carlin, DeRusha and Inglis2022, p. 4). Such was the severity of the vulnerability, including because of its being an obscure transitive dependency for an enormous volume of OSS and commercial software, that it triggered a multinational response by public and private sectors alike, scrambling to “identify and mitigate hundreds of millions of potentially affected devices” (Silvers et al., Reference Silvers, Adkins, Alperovitch, Carlin, DeRusha and Inglis2022, pp. 5–9, 11). The Director of the Cybersecurity and Infrastructure Security Agency (CISA) went as far as terming the vulnerability as “one of the most serious I’ve seen in my entire career, if not the most serious” (Starks, Reference Starks2021, para. 1).
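Why a flaw like Log4Shell affected software whose authors had never directly imported the vulnerable package comes down to graph reachability: risk propagates through every tier of dependencies. A minimal sketch, using a hypothetical dependency graph (the package names other than log4j-core are invented for illustration), is:

```python
# Hedged sketch: a toy dependency graph and a breadth-first search that
# reports every product transitively depending on a vulnerable package.
# All package names except log4j-core are hypothetical.
from collections import deque

deps = {  # package -> its direct dependencies (illustrative)
    "payments-app": ["web-framework"],
    "web-framework": ["logging-facade"],
    "logging-facade": ["log4j-core"],
    "log4j-core": [],
}

def depends_on(root: str, target: str) -> bool:
    """True if `target` is reachable from `root` through any tier of dependencies."""
    seen, queue = set(), deque([root])
    while queue:
        pkg = queue.popleft()
        if pkg == target:
            return True
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(deps.get(pkg, []))
    return False

exposed = [pkg for pkg in deps if depends_on(pkg, "log4j-core")]
print(exposed)
```

Here payments-app never references log4j-core directly, yet it is exposed through two intermediate tiers; this is the "deeply buried" dependency problem that software bills of materials and dependency-scanning tools are meant to surface.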
Therefore, the immense number of vulnerabilities in OSS ecosystems that are inherited by commercial software products, including critical software products that are deployed on DPI Backbones, as well as the greater intent of malicious cyber actors to exploit those vulnerabilities as part of software supply chain attacks makes it all the more vital for India to pay greater attention to the OSS risk to its DPI and the India Stack. While India should thus be applauded for the G20, under its presidency, recognizing “the importance of promoting open-source software” to enable (cross-border) DPI interoperability, India must not lose sight of the need to drive work to uplift the security of OSS, in line with the G20 calling for security to be built into the digital economy and DPI, as flagged earlier (G20 Digital Economy Ministers, 2023, para. 9). Its resilience as a democracy, the world’s largest, is at stake, given the criticality of its DPI generally (as CNI) to Indians’ trust in democratic institutions and the democratic mode of government as the ideal means to advance their economic development, growth and human rights, including their democratic rights, per Figure 11.1 (Guterres, Reference Guterres2022, para. 10). Similarly, critical software- and OSS-borne risks to India’s DPI Backbones threaten its efforts to implement the UNGA-approved cyber norms outlined in the opening paragraphs of this chapter.
These are the sort of issues of which India must be especially mindful when it looks to share its open standards-based approach to DPI through the India Stack, particularly with democratic partners in the Global South. It is imperative that India helps fellow democracies deploy DPI in a cyber-resilient manner that makes it harder for malicious actors to weaponize their technological dependencies and undermine citizens’ trust and confidence in their democratic modes of government, as detailed in the next section.
Taking India’s Approach to DPI and the India Stack to the World
Given the success of India’s DPI in advancing India’s economic development (as detailed in the introduction to the India Stack and Indian DPI), India’s exporting its approach can enable other countries, especially democracies in the Global South, to benefit like it has. These benefits include the strengthening of the trust and confidence of their citizens in democracy as a system of government to drive economic development and growth, per the relationships described in Figure 11.1. This will be reinforced by advice from India on how to deploy DPI in a cyber-resilient manner, informed by the G20 High-Level Principles and G20 DPI Framework enacted under India’s G20 presidency, and especially if India implements the recommendations in the immediately preceding section of this chapter (“Recommendations”) on prosecuting systemic cyber risks from critical software. India, the world’s largest democracy, can thus enable its fellow democracies, especially among developing countries, to implement the UNGA-approved cyber norms, per Figure 11.1, and better ensure their national (cyber) resilience amid a deteriorating threat landscape detailed in the previous section of this chapter (Guterres, Reference Guterres2022, para. 10).
India’s experience as G20 President stands it in good stead. DPI was among its core priorities, including the facilitation of more sharing of expertise on DPI (Ministry of External Affairs, 2022c, paras. 35–37). India followed this up by forging “groundbreaking” international consensus on DPI (Gates, Reference Gates2023). It successfully pushed for the creation of the G20 DPI Framework, endorsed by the G20 Leaders (as mentioned in the previous section of this chapter) who also “recognize[d] that safe, secure, trusted, accountable and inclusive digital public infrastructure, respectful of human rights, personal data, privacy and intellectual property rights can foster resilience, and enable service delivery and innovation” (G20 Leaders, 2023, para. 56). Indeed, the Indian G20 presidency produced the first “multilaterally agreed language and detail on DPI” (Chaudhuri, Reference Chaudhuri2023, para. 14). With governments and donors around the world seeking to invest in DPI deployment (versus little prior multilateral work on DPI), the championing of DPI by the G20 in 2023 was pathbreaking, a “centerpiece of India’s G20 presidency,” and has helped forge India’s “digital revolution” as a source of soft power (Chaudhuri, Reference Chaudhuri2023, paras. 11, 16; Mehta, Reference Mehta2023, para. 2). India is indeed poised to share its approach, particularly with democratic developing countries for them to, as the Indian prime minister put it, “unlock the power of inclusive growth” (Modi, Reference Modi2023, para. 21), which would enable them to better promote their democratic resilience, given in Figure 11.1.
Indeed, Indian efforts will be aided by New Delhi’s general reputation as a trusted development partner, particularly for the Global South (Banerji, Reference Banerji2023).
Firstly, its G20 presidency was, and its wider foreign policy is, animated by “Vasudhaiva Kutumbakam,” a Sanskrit phrase from the ślokas [VI.71–73] of the ancient Hindu text, the Maha Upanishad, which roughly translates to “the world is one family” (Ministry of External Affairs, 2022b, para. 21; Vivekananda International Foundation, 2019, p. 7). One of the priorities of its G20 presidency was “accelerated, inclusive and resilient growth” (Ministry of External Affairs, 2022b, para. 32).
Secondly, as part of its prioritization of inclusive growth and development, India devoted special attention to uplifting that of the Global South and its role in international governance. India hosted two virtual “Voice of the Global South Summits” where countries from the Global South provided their views on matters such as economic development and DPGs, matters that go directly to the promotion of human rights and democratic resilience, given in Figure 11.1 (Government of India, 2023b; Ministry of External Affairs, 2023a, paras. 1–2, 9–17). As G20 President, India placed the “Global South squarely at the center of the global governance agenda,” having successfully proposed the permanent G20 membership of the African Union, which it considers “the most satisfying outcome” of its Presidency (Jaishankar, Reference Jaishankar2023a, para. 5; Pant, Reference Pant2023, paras. 1–2). As the Indian External Affairs Minister put it, India pushed for solutions to global challenges from the Global South itself as G20 President (Jaishankar, Reference Jaishankar2023a, para. 4). This is combined with the Indian prime minister “firmly” believing that “the success of our G20 presidency is the success of the Global South” (Jaishankar, Reference Jaishankar2023b, para. 6).
Thirdly, India has an excellent reputation for public goods delivery, for instance, as a trusted health care partner. Under its COVID-19 vaccine diplomacy initiative, “Vaccine Maitri,” it gifted over 14.85 million doses of locally manufactured COVID-19 vaccines to fifty countries and provided close to 51.52 million doses to forty-eight countries in the Global South under the COVAX facility until November 30, 2022 (Ministry of External Affairs, 2023b, p. 309). Jamaica received its first batch of COVID-19 vaccines from India, with the Jamaican Foreign Minister saying that, “From the very onset, India was a reliable partner whose assistance was critical to our pandemic response” (ANI, 2022a, paras. 2–3). Similarly, India co-led with South Africa the international push for a waiver of intellectual property rights for COVID-19 vaccines and is now working to extend the waiver to drugs used to treat the disease (Kumar, Reference Kumar2021, para. 1; The Pharma Letter, 2023).
These factors add to the attractiveness of India’s offering in terms of DPI and the India Stack, especially for fellow democracies in the Global South. The goodwill India thus enjoys will be reinforced by how it is not looking to weaponize the India Stack. Rather, India seeks to empower other countries to use an open standards-based approach to the digital delivery of essential services, which prioritizes cyber resilience, per the G20 High-Level Principles and G20 DPI Framework enacted under India’s G20 presidency and if India implements the Recommendations. As seen with CoWIN (Sansad TV, 2023), recipient countries are free to customize and build with the India Stack and India’s approach as they see fit, “no strings attached,” to quote the head and co-founder of the Digital India Foundation (Sansad TV, 2023). With the international community confident in DPI being “a critical accelerator” of the achievement of the Sustainable Development Goals (Gates, Reference Gates2023), the Indian approach to DPI will enable recipient democracies, especially in the Global South, to promote their citizens’ faith in democracy itself as a means of driving their economic development, growth, and thus preservation of their human rights, combined with implementing UNGA-approved cyber norms, as per Figure 11.1.
This is in stark contrast with China’s Belt and Road Initiative and Digital Silk Road that constitute the blatant weaponization of Chinese technology vendors to influence and/or interfere in the affairs of recipient countries (Lewis, Reference Lewis2023, p. 7). Instead, India, the world’s largest democracy, seeks to help these countries build their digital sovereignty and strengthen their overall democratic resilience through robust, cyber-resilient DPI and DPI Backbones in line with their needs, rather than “weaponise their interdependence” with the India Stack (Guterres, Reference Guterres2022, para. 10; Narlikar, Reference Narlikar, Dresser, Farrell and Newman2021, pp. 289–290). This echoes how India characterizes its development cooperation with the Global South as “demand driven, outcome oriented, transparent and sustainable” (Jaishankar, Reference Jaishankar2023b, para. 9).
Further differentiating India’s efforts from China’s on digital economy diplomacy is its bilateral cooperation with leading democratic partners on DPI projects (in the Global South), such as the European Union, United States, France, Germany and Japan (Choudhury, Reference Choudhury2023, paras. 20, 28; European Union & Government of India, 2023, p. 2; Government of India & Government of France, 2023, para. I.6.1.7; Ministry of External Affairs, 2023c, para. 3; The White House & Government of India, 2023; The White House & Government of India, 2024, paras. 37, 39; Government of India & Republic of Germany, 2024, para. 32). India and the United States have incorporated DPI into their bilateral Initiative on Critical and Emerging Technologies (Government of India, 2023c, para. 5). Both countries also cooperate on DPI with the Republic of Korea through a “trilateral technology dialogue,” looking to “expand cooperation on … [among other things,] delivering technology solutions for the broader Indo-Pacific region” (U.S. Mission Korea, 2024, paras. 1–2). Furthermore, the championing of DPI by India, the world’s largest democracy, and France as an enabler for “open, free, democratic and inclusive digital economies and digital societies” (Government of India & Government of France, 2023, para. I.6.1.7; Guterres, Reference Guterres2022, para. 10) will help project India as a trusted, democratic technology partner for fellow developing country democracies that look to bolster their citizens’ faith in democracy and democratic institutions as drivers of economic development and uphold UNGA-approved cyber norms by developing and deploying robust, cyber-resilient DPI, per Figure 11.1. India’s reputation as such is key to the success of its work to share its DPI expertise because “trust and transparency” are valuable commodities in foreign policy, especially in digital economy diplomacy, as defined by the Indian External Affairs Minister (Ministry of External Affairs, 2022b).
Indeed, amid technological contestation and balkanization, those very commodities will underwrite India’s efforts to lead “an international architecture of collaboration” on DPI (Carnegie India, 2022; Ministry of External Affairs, 2022b). India has established the Global Digital Public Infrastructure Repository (GDPIR, welcomed by the G20 Leaders), a virtual library for DPI that countries and organizations have deployed at scale (at the time of writing, the GDPIR features over fifty projects from sixteen countries) (G20 Leaders, 2023, para. 56.ii; Ministry of Electronics and Information Technology, 2023, para. 3; https://dpi.global). India has created the One Future Alliance (OFA), a voluntary capacity building initiative for DPI targeted at low- and middle-income countries (LMICs) (G20 Leaders, 2023, para. 56.iii). India’s credibility is also enhanced by its forming a Social Impact Fund (SIF), having pledged 25 million dollars to this government-led, multistakeholder capacity-building and financing initiative for DPI deployment in the Global South (Ministry of Electronics and Information Technology, 2023, para. 4).
The GDPIR, OFA, and SIF can underpin an India-anchored international architecture for DPI. The GDPIR provides a knowledge base for DPI. The OFA can structure DPI internationalization (Choudhury, Reference Choudhury2023, para. 26), financially backed by the Global South-focused SIF. In potentially leading the OFA and SIF and already hosting the GDPIR, India can build on its credentials as a trusted hub for DPI expertise while working with its democratic partners, as mentioned earlier, and other stakeholders such as industry and academia to streamline DPI capacity-building efforts around the world and thus, at least indirectly, streamline the promotion of democracy as a sustainable means of driving economic development (Choudhury, Reference Choudhury2023, para. 27; see Figure 11.1). India can also include the Global South in its DPI diplomacy by linking the OFA, GDPIR, and SIF with the DAKSHIN “global centre of excellence dedicated to [the Global South],” which the Indian prime minister launched in November 2023 (DD News, Reference Banerji2023, paras. 1–2). Such efforts would most likely implement the vision of the G20 Digital Economy Ministers (2023, para. 10) under the Indian presidency, which called for coordinated capacity building, technical assistance as well as “global multistakeholder approaches … for implementing robust, inclusive, human-centric, and sustainable DPI in LMICs.”
India can weave its aforementioned bilateral cooperation with democratic partners on DPI – including through a U.S.–India Global Digital Development Partnership suggested by both countries in June 2023 – as well as its aforementioned trilateral cooperation with the United States and Republic of Korea into its multilateral DPI diplomacy via the OFA and SIF so that the latter initiatives receive further credibility from the backing of major democracies (The White House & Government of India, 2023, para. 39). An example of such crossover from bilateral or trilateral cooperation to multilateral cooperation is how, in January 2024, France “expressed its support to join [OFA]” in order to “further synergize global efforts on building DPI capacities” (Government of India & Government of France, 2024, para. 24). In terms of concrete multilateral cooperation, India itself has pledged $10 million to the World Health Organization’s Global Initiative on Digital Health, which includes capacity building for countries in the Indo-Pacific region seeking to adopt India’s approach to DPI to aid cancer diagnosis and treatment (Commonwealth of Australia, Government of India, Government of Japan, & United States Government, 2024a, para. 9).
Indeed, such cooperation would be aided by increased momentum for DPI cooperation at the multilateral (and multistakeholder) level more broadly since India’s G20 presidency. The following are key examples of this momentum in 2024 alone:
In September, the UNGA approved the Global Digital Compact (United Nations, General Assembly, 2024, Annex I), which recognized DPGs and “Resilient, safe, inclusive and interoperable” DPI as “key drivers of inclusive digital transformation and innovation,” along with the necessity for greater investment in their deployment (United Nations, General Assembly, 2024, Annex I, paras. 14–16). The UNGA even committed, among other things, to establishing safeguards “for inclusive, responsible, safe, secure and user-centred [DPI]” and to growing finance for DPG and DPI development (particularly in developing countries) by 2030 (United Nations, General Assembly, 2024, Annex I, paras. 17(c), (e)).
In October, the “Global DPI Summit 2024” featured representatives across stakeholder groups from over 100 countries (Global DPI Summit 2024, 2024, para. 1). In addition to highlighting “the extraordinary progress” in the rollout of DPI around the world, attendees called for greater collaboration in areas like ecosystem-building; “safeguards to ensure that DPI is people-centric, transparent, accountable, and fair of all users”; ensuring adequate “support for the widespread implementation of DPI”; and promoting the adoption of robust technical standards to improve the interoperability and security of DPI (Backbones) (Global DPI Summit 2024, 2024, paras. 2, 5–8, 11–12, 15–18).
In November, India cosponsored the Declaration on Digital Public Infrastructure, AI and Data for Governance with Brazil (G20 President for 2024) and South Africa (G20 President for 2025), which was endorsed by several G20 governments (Government of India, Government of Brazil, & Government of South Africa, 2024).
Such momentum certainly helps India’s cause as it seeks to lead international efforts to develop and adopt DPI. That world leaders, particularly through the UN, endorsed the value of DPGs and DPI as instruments of economic and digital development, and called for greater investment in their development and deployment, is highly consequential. After all, consensus at the UN level not only validates but also significantly builds upon the pioneering multilateral consensus engineered by New Delhi on DPI and DPGs as G20 President in 2023.
Furthermore, India’s standing as a trusted DPI partner and its leadership in DPI development and deployment will be aided by its work through the Quad. In September 2024, the Quad released the Quad Principles for Development and Deployment of Digital Public Infrastructure, including inclusivity, security (such as by incorporating “security features into the core design to ensure … resilience”) and “Governance for Public Benefit, Trust, and Transparency” (Commonwealth of Australia, Government of India, Government of Japan, & United States Government, 2024b, paras. 3.i, v, vii).
This is combined with the work of the Quad on uplifting software security, particularly through the Joint Principles for Software Security that it released in May 2023 and that seek to “promote and strengthen a culture where software security is by design and default” (Quad Senior Cyber Group, 2023, p. 2). These joint principles lay down “high-level secure software development practices” that echo the recommendations of NIST (Quad Senior Cyber Group, 2023, p. 2; Souppaya, Scarfone & Dodson, 2022, p. 4). Apart from committing to acquire software from vendors meeting these practices (thereby pledging to use their collective purchasing power to uplift software security), the Quad countries aim to “where necessary … build policy frameworks” that “encourage” software developers and suppliers to follow said software development practices (Quad Senior Cyber Group, 2023, p. 2).
Given the fundamental dependence of any DPI and DPGs on secure software (as mentioned earlier), work by the Quad to uplift software security under these joint principles underpins the ability of India to implement the Recommendations at home and help build and deploy cyber-resilient DPI in other (Global South) democracies; thereby helping the latter boost the confidence of their citizens in democratic institutions as effective vehicles for their economic development, per Figure 11.1. This is particularly significant in light of the deteriorating threat landscape for software supply chains, as per the previous section of this chapter on prosecuting systemic cyber risks. Given that DPI Backbones are CNI assets (see the opening paragraphs of this chapter), India’s work to uplift software security through the Quad would also help fellow democracies implement the UNGA-approved cyber norms, as well as deliver public goods for their people such as national cyber resilience and national security (Nayyar, Reference Nayyar2023, pp. 28–30).
One should note that India has already taken several concrete steps to export its approach to DPI.
For example, it cooperates with a number of countries to internationalize the UPI.
As part of its engagement with over thirty countries, including Japan, Australia, and Saudi Arabia, the NPCI has connected the UPI with France’s Lyra Network, as well as the Bhutanese, Mauritian, Singaporean, and Sri Lankan payments networks (Ministry of Commerce & Industry, 2022, para. 6; Rudra, Reference Rudra2023, para. 1; Sibal, Reference Sibal2022, Reference Sibal2023, paras. 1–3; Wadhwa, Amla, & Salkever, Reference Wadhwa, Amla and Salkever2022, para. 2; Prime Minister’s Office, 2023; Ministry of External Affairs, 2024a, para. 1; PIB India, 2024).
In June 2024, the Reserve Bank of India cofounded Project Nexus, a multilateral program to link the national ‘Fast Payments Systems’ of India, Malaysia, the Philippines, Singapore, and Thailand; it is expected to launch by 2026 (Pancholy, Reference Pancholy2024, paras. 1–4).
The Indian and Emirati governments have signed agreements on “cooperation in digital infrastructure projects” and on linking the UPI with the United Arab Emirates’ (UAE’s) AANI payments network, while UPI payments can already be made through the PhonePe payments app at Mashreq’s NEOPAY Terminals in the UAE (Ministry of External Affairs, 2024b, para. 3; HT News Desk, 2024).
Just in October 2024, the Maldivian president decided to establish “a consortium to introduce [the] UPI in the Maldives” (The President’s Office, 2024, para. 3).
India’s economic statecraft with respect to exporting the UPI also features the work of NPCI International (the “international arm” of the NPCI) with a growing number of overseas partners.
The body signed a memorandum of understanding (MoU) with Google Pay India to expand the acceptance of UPI payments outside India; export the approach to digital payments that the UPI represents; and “[ease] the process of remittances between countries” through use of the UPI’s infrastructure (NPCI International, 2024a, paras. 2, 8).
It signed an MoU with the major Greek bank, Eurobank, to create a “strategic alliance” for upgrading India–Greece payments flows and “streamlining remittances from Greece to India” using UPI technology (NPCI International, 2024b, paras. 1, 3).
It partnered with the Bank of Namibia and Central Reserve Bank of Peru to help their respective countries develop and deploy UPI analogues (NPCI International, 2024c; NPCI International, 2024d).
Turning from payments to health care, CoWIN’s success as a component of India’s DPI in fighting the pandemic attracted so much international interest that India’s virtual “CoWIN Global Conclave” was attended by representatives from 142 countries (Chandna, Reference Chandna2021; Ministry of Health and Family Welfare, 2021). By early February 2023, India had open-sourced CoWIN and shared it with over twenty-seven countries, enabling them to customize the platform as per their needs (Sansad TV, 2023).
When it comes to digital identity, the Philippines and Morocco became the first countries to adopt an Aadhaar-based digital identity system in March 2023 (Lele, Reference Lele2023).
In terms of sharing India’s approach to DPI more generally, a number of initiatives have been set in train in the past few years alone.
India signed MoUs with Armenia, Sierra Leone, Suriname, and Antigua and Barbuda on building DPI based on the India Stack (Kant & Sudke, Reference Kant and Sudke2023, para. 9).
Sri Lanka plans to tailor India’s DPI approach to enable the “effective and efficient delivery of citizen-centric services” (Government of India & Government of Sri Lanka, 2023, para. 4.IV.e). This is in addition to both countries agreeing to establish a Joint Working Group on implementing DPI, including by adapting India’s DigiLocker and Aadhaar for Sri Lanka (Government of India & Government of Sri Lanka, 2024, paras. 15.iii, v).
Google India and the EkStep Foundation launched “DPI in a Box,” an initiative providing countries with “a readily deployable model” of India’s DPI, including Aadhaar (Google India Team, 2024, paras. 13–18).
In September 2024, the Secretary of Papua New Guinea’s (PNG’s) Department of Information Communication Technology observed that PNG’s bilateral cooperation with India informed the former’s own projects on digital identity, payments, and building DPI ecosystems (ASPICanberra, 2024). The Secretary stressed the need for development partners more generally to see the value in (financing) DPI projects (ASPICanberra, 2024).
India and the Maldives agreed to cooperate on DPI, including by enabling the Maldives to adopt an Aadhaar analogue (Government of India & Republic of Maldives, 2024, para. 4.IV.ii).
In light of factors such as India’s expertise and experience in DPI development and deployment, its growing cooperation with international partners in a variety of contexts, and the strengthening multilateral and multistakeholder consensus on the need for greater investment in DPI around the world, the world looks to India for leadership. Indeed, India is well-positioned to lead the way on DPI, particularly in advising fellow democracies (in the Global South) on how to develop and deploy cyber-resilient DPI solutions and thus implement the UNGA-approved cyber norms (see Figure 11.1). In doing so, India will not only help fellow democracies better assure their national (cyber) resilience amid a worsening risk landscape but also help them strengthen the faith of their own polities in democracy as a force multiplier for their own development, much like DPI itself.
Conclusion
This chapter began by defining democracy, including with reference to international law, as the right of a people to shape their collective destiny, encompassing specific human rights such as peaceful assembly, expression, and association. It pointed to the relationship between economic development, economic growth, and the promotion of human rights, including democratic rights. Threats to the ability of democratic institutions such as elected governments to drive their citizens’ economic development and thus promote their human rights are threats to those democracies themselves, given that the manifestation of these threats can undermine citizens’ confidence in democracy as a system of government under which they can flourish. In this context, the cyber resilience of CNI assets, which enable and preserve economic development, was argued to be a pillar of democracies and indeed of governments’ ability to implement UNGA-approved cyber norms with respect to human rights and CNI protection. The example of DPI Backbones as CNI assets provided the lens for this chapter on defending democracy in a digital age featuring a worsening cyber threat landscape, with India’s approach to DPI and the India Stack serving as a case study.
After introducing DPI and DPGs (the essential ingredients for DPI) as enablers for the delivery of digitally native essential services at scale via open standards-based paradigms unconstrained by the closed paradigms of private technology companies, this chapter explored the DPI of the world’s largest democracy and the India Stack. It highlighted the sheer quantum of service delivery by India’s DPI, built on the three layers of the India Stack: identity, payments, and data. India’s DPI has accelerated financial inclusion, empowered citizens through reliable digital public services, greatly reduced leakages of social security expenditures, and made its COVID-19 vaccination program quite efficient and transparent. In this respect, India’s DPI, built atop the India Stack, is crucial to the world’s largest democracy driving the economic development of its people and thus their confidence in their democratic system of government as the ideal means of advancing their interests, attracting the praise of world leaders and international organizations alike.
Given that DPI and DPGs are digitally native, and Indians are extremely dependent on the latter for a number of digitally native essential services, the chapter then called for India to prosecute systemic cyber risks to its DPI Backbones that stem from the critical software running on them. With India’s G20 presidency forging multilateral consensus on the foundational importance of security to the digital economy, there is a need for India to incentivize the vendors of critical software to invest in the security of their SDLCs, perform robust software SCRM, and manage OSS risks appropriately. India has made progress in doing so, including as G20 President. The G20 DPI Framework and G20 High-Level Principles, endorsed by the G20 Leaders in 2023 and calling for security to be built into all software and DPI alike, are a testament to India’s efforts.
The chapter concluded with a discussion of how India, the world’s largest democracy, can export its approach to DPI and the India Stack, especially to fellow democracies in the Global South. Given India’s success in advancing its citizens’ economic development through DPI and the India Stack, other democracies stand to gain from what it has learned, including by strengthening the trust and confidence of their citizens in democracy as a system of government to drive their development and growth, as well as by implementing the UNGA-approved cyber norms. This will be reinforced by advice from India on how to deploy DPI in a cyber-resilient manner, informed by the G20 High-Level Principles and the G20 DPI Framework, and further strengthened if India implements the Recommendations; such advice is all the more important amid a deteriorating cyber threat landscape.
India is poised to share its approach to DPI and the India Stack for a number of reasons, particularly that: its G20 presidency forged the first multilateral consensus on DPI policy; India is known as a trusted development partner (for the Global South), a reputation enhanced by its not seeking to weaponize its DPI expertise and the India Stack against smaller democracies (in the Global South); and India is increasingly cooperating on DPI (in the Global South) with leading democracies, which lends its own efforts more credibility. This is backed by the building of momentum at the multilateral and multistakeholder levels since India’s G20 presidency on the value of DPI as an instrument of economic and digital development, and on the need for greater investment in the development and deployment of DPI around the world.
These factors stand the world’s largest democracy in good stead also in its plans to lead “an international architecture of collaboration” on DPI (Carnegie India, 2022) through initiatives such as the OFA, SIF, and GDPIR, potentially integrated with the DAKSHIN global center of excellence for the Global South. Its leadership and partnership with other countries will be aided by its work through the Quad on DPI and on uplifting software security in general, given that DPI and DPGs are digitally native, and this work will help all countries implement the UNGA-approved cyber norms, as well as deliver public goods for their people such as national cyber resilience and national security. This is complemented by India’s strides in exporting its approach to DPI, including through the linkage of the UPI with overseas payments networks, the sharing of an open-sourced version of CoWIN with a few dozen countries for their vaccination programs, and its agreeing to work bilaterally with a range of Global South countries on developing and deploying DPI.
Whether it is deploying DPI at home in a cyber-resilient manner or capacity-building fellow democracies on how to do so in order to bolster their polities’ confidence in democracy itself, the world looks to India to lead.
And the world is indeed India’s oyster.
Cyber Threats and Authoritarian Regimes: Challenges to Defending Cybersecurity
The disruptive and damaging cyber threats and attacks carried out by authoritarian regimes and their proxies against both domestic and international entities are a long-standing issue, albeit one that has become increasingly prominent in recent years (e.g., Bradshaw & Howard, Reference Bradshaw and Howard2019; Morgan, Reference Morgan2018; Schünemann, Reference Schünemann, Cavelty and Wenger2022; Woolley & Howard, Reference Woolley and Howard2017). Scholars have examined diverse cyber threats and attacks ranging from, for instance, internet shutdowns (e.g., Majeed, Reference Majeed2022; Mare, Reference Mare2020) to opinion manipulation (e.g., Alyukov, Reference Alyukov2022; Bradshaw & Howard, Reference Woolley and Howard2017; King, Pan, & Roberts, Reference King, Pan and Roberts2017), and from content blockage and filtering (e.g., Greitens, Reference Greitens2013; Roberts, Reference Roberts2018; Ververis, Marguel, & Fabian, Reference Ververis, Marguel and Fabian2020) to fake news and disinformation campaigns (e.g., Abrahams & Leber, Reference Abrahams and Leber2021; Jones, Reference Jones2022). As authoritarian regimes actively shape cyberspace at home and on the global stage to their own strategic advantage, understanding their cyber threats not only explains their resilience but, more importantly, helps monitor their capacity for resurgence (Deibert, Reference Deibert2015, p. 64, emphasis as in original) and the consequent challenges to cybersecurity.
Among authoritarian countries, China’s cyber or internet policies and threats have attracted significant attention, especially in recent years (e.g., Hung & Hung, Reference Hung and Hung2022; Lindsay, Cheung, & Reveron, Reference Lindsay, Cheung and Reveron2015; Myers & Mozur, Reference Myers and Mozur2019; Shackelford et al., Reference Shackelford, Raymond, Stemler and Loyle2020). These contributions notwithstanding, extant scholarship has not yet captured the full gamut of cyber policies and politics that could generate threats to democracies. More specifically, existing studies have either presented the overall picture of internet governance and policies in China (e.g., Miao, Jiang, & Pang, Reference Miao, Jiang and Pang2021; F. Yang & Mueller, Reference Yang and Mueller2014) or elaborated on individual policies and regulations that cater to specific subject matter (e.g., Lindsay, Cheung, & Reveron, Reference Lindsay, Cheung and Reveron2015; Qi, Shao, & Zheng, Reference Qi, Shao and Zheng2018). Few studies, however, have analyzed the media narratives and discourses surrounding those policies and regulations, which exert a broader influence on society beyond the policy domain. To fill this gap, this chapter tracks the media narrative and discourse centered on the term “cybersecurity” to uncover how China, “a latecomer on the global cybersecurity scene” (Qi, Shao, & Zheng, Reference Qi, Shao and Zheng2018, p. 1343), has oriented the domestic discursive constructions of cybersecurity toward the perceived national context of the audience.
In the following sections, I first present a general review of studies on China’s internet policies. Second, I explain “domestication” (Clausen, Reference Clausen2004) as the theoretical framework to scrutinize the media narrative of cybersecurity in China’s internet policy. The third section presents the method for data collection and computer-assisted semantic network analysis as the analytical strategy. In the fourth section, the analysis and discussion explain how the discourse on cybersecurity in media coverage legitimizes the regime’s role in enacting internet policy, including its control over the free flow of information, while accusing democracies of cyberattacks. The chapter concludes with implications for the domestication of cybersecurity and challenges to cybersecurity.
Literature Review
Since China established its connection to the internet in 1994, internet policies have been a key topic in the debate on how the internet is transforming (authoritarian) China (e.g., Taubman, Reference Taubman1998; Wu, Reference Wu1996). Despite the then popular utopian vision of the internet (or cyberspace) as a facilitator of a radically democratic form of participation from its inception (Barlow, Reference Barlow2019), several scholars remind us that from the beginning of its connection with the internet, the party-state in China was engaged in “a multipronged effort” (Taubman, Reference Taubman1998, p. 267), using resources and regulations to prevent the internet from becoming “a disruptive force in the domestic area” (Taubman, Reference Taubman1998, p. 268), the best-known tactic being the construction of a self-contained national intranet behind a firewall (Barme & Ye, Reference Barme and Ye1997; Griffiths, Reference Griffiths2021).
Generally speaking, extant scholarship has investigated China’s internet policies and politics from both top-down and bottom-up approaches. The top-down approach encompasses macro-level policy, as well as the legal and technological aspects of internet control, surveillance, and censorship (e.g., Deibert, Reference Deibert2002; MacKinnon, Reference MacKinnon2009; Roberts, Reference Roberts2018; Xiaoming, Zhang, & Yu, Reference Xiaoming, Zhang and Yu1996). For instance, some studies have scrutinized regulatory control (e.g., Esarey & Kluver, Reference Esarey and Kluver2014; MacKinnon, Reference MacKinnon2008, Reference MacKinnon2009; Pan, Reference Pan2017), while others have looked at the technical infrastructure of the “Great Firewall of China” (Barme & Ye, Reference Barme and Ye1997), such as techniques of filtering, domain name system poisoning, and virtual private network blocking (e.g., Clayton, Murdoch, & Watson, Reference Clayton, Murdoch and Watson2007; Deibert et al., Reference Deibert, Palfrey, Rohozinski and Zittrain2008; Lowe, Winters, & Marcus, Reference Lowe, Winters and Marcus2007; for a review, see, e.g., Keremoğlu & Weidmann, Reference Keremoğlu and Weidmann2020). Some, like Pan (Reference Pan2017), have revealed how the market dominance of domestic internet platforms that “comply with China’s censorship requirements” allows “the Chinese regime to engage in content censorship that quickly removes online content pertaining to collective action while retaining a great deal of information, including criticisms of the government, online” (p. 182; see also, e.g., Weber & Jia, Reference Weber and Jia2007). Others, such as Liang and Lu (Reference Liang and Lu2010), with their term “multidimensional regulatory system,” have offered a holistic view of how multiple agencies use various laws and regulations to establish comprehensive control over the infrastructure and the commercial and social use of the internet in China.
The bottom-up approach, seen mostly in studies on internet control and censorship, involves understanding the mechanisms behind censorship by analyzing which elements are censored and which are not (e.g., Bamman, O’Connor, & Smith, Reference Bamman, O’Connor and Smith2012; Crandall et al., Reference Crandall, Crete-Nishihata, Knockel, McKune, Senft and Tseng2013; King, Pan, & Roberts, Reference King, Pan and Roberts2013, Reference King, Pan and Roberts2017; Liu & Zhao, Reference Liu and Zhao2021; Qin, Strömberg, & Wu, Reference Qin, Strömberg and Wu2017). In bottom-up studies, large-scale datasets are harvested through application programming interfaces and analyzed via complex computational and statistical approaches to depict nuanced pictures of how the control, surveillance, manipulation, and censorship program is implemented on the ground (e.g., King, Pan, & Roberts, Reference King, Pan and Roberts2013, Reference King, Pan and Roberts2017; Liu & Zhao, Reference Liu and Zhao2021; Lu & Pan, Reference Lu and Pan2021; Roberts, Reference Roberts2018). Yet, for all its significance, the bottom-up approach lacks a holistic view of policies, as well as of the discourse and narratives revolving around them, that would paint a fuller picture of China’s evolving internet politics.
In recent years, more and more studies have moved beyond treating policy as relatively stable, tracking instead a combination of policy change and policy stability in internet policies, given the complexity of “allowing access and giving a fair amount of freedom to non-political and less-political information exchange in this country … [while] resisting with firewall solutions and regulations” (Zhang, Reference Zhang2006, p. 285) in China. F. Yang and Mueller (Reference Yang and Mueller2014), for example, used content analysis of the laws and regulations on internet governance between 1994 and 2012 to track how these policies changed over time, identify different policymaking agencies, and ascertain the various scopes of application and topical focuses of these policies. Among the many issues and themes, content regulation and cybersecurity have occupied significant and substantial parts of internet policies (57 percent and 41 percent, respectively; see F. Yang & Mueller, Reference Yang and Mueller2014, p. 458). Similarly, the study by Miao, Jiang, and Pang (Reference Miao, Jiang and Pang2021, p. 2021) on Chinese internet policies issued between 1994 and 2017 uncovered a paramount concern over cybersecurity that exemplifies the state’s “deep-seated insecurity over regime stability” in numerous internet regulations. Still, in comparison to the extensive scholarship on content regulation and moderation in China, as discussed earlier, the topic of cybersecurity remains relatively understudied, despite its growing visibility in recent years (but see Lindsay, Cheung, & Reveron, Reference Lindsay, Cheung and Reveron2015; Mueller, Reference Mueller, Deibert, Palfrey, Rohozinski and Zittrain2011). This chapter aims to enrich the discussion with the concept of “domestication.”
Domestication: National Lens on Cross-border Events and Beyond
A key concept in journalistic practice, domestication (Gurevitch, Levy, & Roeh, Reference Gurevitch, Levy, Roeh, Dahlgren and Sparks1993) refers to the framing of foreign or international events to render them comprehensible, compatible, appealing, and relevant to national or local audiences (e.g., Alasuutari, Qadir, & Creutz, Reference Alasuutari, Qadir and Creutz2013; Clausen, Reference Clausen2004; Huiberts & Joye, Reference Huiberts and Joye2018). The fundamental idea, as Gurevitch, Levy, and Roeh (Reference Gurevitch, Levy, Roeh, Dahlgren and Sparks1993) explain, is that “the ‘same’ events are told in divergent ways, geared to the social and political frameworks and sensibilities of diverse domestic audiences” (p. 217, emphasis added). A substantial amount of work has been conducted to interrogate different discursively adaptive ways in which cross-border news is domesticated to make it suited for different audiences nationally and locally (e.g., Clausen, Reference Clausen2004; Joye, Reference Joye2015; Lee, Chan, & Zhou, Reference Lee, Chan and Zhou2011). Olausson (Reference Olausson2014) further differentiates three discursive modes of domestication: “(1) introverted domestication, which disconnects the domestic from the global; (2) extroverted domestication, which interconnects the domestic and the global; and (3) counter-domestication, a de-territorialized mode of reporting that lacks any domestic epicenter” (p. 715). In short, domestication entails a deliberate choice in how one talks about and understands the world.
A growing body of studies has expanded domestication research beyond cross-border events. Domestication as a process of discursive appropriation and transformation occurs not only in the genre of cross-border news, which has been extensively studied, but also in other genres, such as entertainment (Adamu, Reference Adamu2010), technology (Matassi, Boczkowski, & Mitchelstein, Reference Matassi, Boczkowski and Mitchelstein2019), education (Alasuutari & Alasuutari, Reference Alasuutari and Alasuutari2012), popular culture (H. Fu, Li, & Lee, Reference Fu, Li and Lee2023), and the implementation of exogenous policy (Alasuutari, Reference Alasuutari2009). H. Fu, Li, and Lee (Reference Fu, Li and Lee2023), for instance, expand the term to examine the process “in which a non-native cultural artifact or practice becomes embedded in and tamed by a techno-cultural arena in a receiver country” (p. 77). In other words, broadly speaking, domestication research should pay attention to the strategies that can generate resonance with national or local audiences and contexts beyond the genre of foreign news. By doing so, domestication not only makes events and artifacts meaningful for the domestic audience but can also be utilized by various actors to reinforce “nation-state discourse and identity” (Olausson, Reference Olausson2014, p. 711).
Following this argument, in this study, we ask the following research question: How does news coverage domesticate “cybersecurity” in internet regulations and policies in China?
Methods
To answer the research question, this study explored the semantic meaning in media narratives and frames as units of analysis. Gamson and Modigliani (Reference Gamson and Modigliani1989) refer to such narratives and frames as “media packages” – that is, as “interpretive packages that give meaning to an issue” (p. 3). Media frames denote structured semantic representations of associated contextual and cultural information (Werner & Cornelissen, Reference Werner and Cornelissen2014). Examining media frames detects the background structure of a shared reality and identifies “the role of political culture and practices in stabilizing particular imaginaries” (Jasanoff & Kim, Reference Jasanoff and Kim2009, p. 121). Given the idea of “associative framing” (Ruigrok & van Atteveldt, Reference Ruigrok and van Atteveldt2007, p. 72), this study operationalizes media narratives and frames as complex patterns of associations between different concepts, with the main associations in a message being its “central organizing idea” (Gamson & Modigliani, Reference Gamson and Modigliani1989, p. 3). In other words, media narratives and frames involve not only the selection of concepts but also their mutual associations that stand for schemata of interpretation. Such associative framings and narratives – recognized as domestication, or discursive appropriation and transformation to local context – are therefore examined through semantic networks derived from the occurrences and co-occurrences of concepts.
Semantic Network Analysis
This study employed computer-assisted semantic network analysis to explore media narratives and frames of cybersecurity in news coverage in the Chinese mainland. With its origin in cognitive science, semantic network analysis rests on the premise that human memory contains a structural meaning system (Collins & Quillian, Reference Collins, Quillian, Tulving and Donaldson1972). Semantic network studies have thus suggested that the frequency, co-occurrence, and distances among words and concepts allow researchers to explore a text’s embedded meaning (Danowski, Reference Danowski1993; Doerfel, Reference Doerfel1998). This study adopted the word association (concept co-occurrence) method, which maps the relationships among words by indexing pairs of concepts. Extending beyond the standard content analysis of texts and frequencies of concepts, semantic network analysis reveals the manifest meaning structure of the text and thus indirectly represents the discursive appropriation enacted by the text’s creators (Danowski, Reference Danowski1993). The analysis, based on secondary data from news sources, was guided by the research question proposed earlier.
Data Collection
To identify and collect data, we first performed an extensive search of news coverage in the Chinese mainland. Before 2017, China enacted several laws and regulations in response to cybersecurity problems (Miao, Jiang, & Pang, Reference Miao, Jiang and Pang2021, p. 2004). These laws and regulations were nevertheless insufficient in dealing with the increasing challenges facing cyberspace. Against this backdrop, China formally introduced the Cybersecurity Law of the People’s Republic of China (“the Cybersecurity Law” in short), which came into effect on June 1, 2017. Accordingly, the time span for the data search and collection was set from June 1, 2017, the date the Cybersecurity Law took effect, to May 1, 2023. We used keyword screening in the Huike News Database (WiseNews, http://wisesearch.wisers.net.cn/), a leading professional database of Chinese media content. The keywords “网络安全 (cybersecurity)” and “互联网安全 (internet security)” were used in all fields to locate news articles covering issues related to cybersecurity, which yielded a total of 6,362 news articles and commentaries (1,268 pieces in 2017; 1,196 in 2018; 1,056 in 2019; 814 in 2020; 962 in 2021; 748 in 2022; and 318 in 2023 through the end of April).
Data Cleaning and Analysis
The next step was to conduct semantic network analysis and explore the discursive network of cybersecurity in news coverage through the following four steps.
• The corpus of 6,362 news articles and commentaries was first preprocessed and cleaned. Raw texts were segmented into words using the Chinese lexical analyzer Jieba. Punctuation, numbers, common Chinese stop words, and nonwords were then filtered.
• Second, the corpus of space-delimited words was submitted to a Python script that counted the frequency of each word and the co-occurrence of word pairs. Word pairs with a raw co-occurrence frequency higher than five were retained for further analysis, following the suggestion of Church and Hanks (Reference Church and Hanks1990), who noted that the mutual information score becomes unstable and meaningless when the count is smaller than five.
• Third, the semantic network of news coverage on cybersecurity was visualized using Gephi (https://gephi.org/). The ego-networks of the terms “cybersecurity” and “internet security” were identified and extracted, given our focus on the semantic meaning and narrative strategies of the term “cybersecurity.”
• Fourth, the modularity partition algorithm in Gephi (Newman, Reference Newman2006) was employed to detect concept communities for the semantic networks. The generated semantic network identified cybersecurity-related concepts in the text data as nodes linked together by the frequencies with which each concept co-occurred with other concepts.
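The counting logic in the first two steps can be sketched in Python. This is a minimal, hypothetical illustration rather than the study's actual script: the toy documents below stand in for Jieba-segmented article texts, and the stop-word list is a placeholder for the Chinese stop-word list used in preprocessing.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for Jieba-segmented news articles; in the actual
# pipeline each document would be the token list of one article.
docs = [
    ["cybersecurity", "law", "china", "security"],
    ["cybersecurity", "china", "development"],
    ["cybersecurity", "law", "china"],
]

STOPWORDS = {"of", "the"}  # placeholder for the Chinese stop-word list

def cooccurrence(docs, min_count=6):
    """Count co-occurrences of word pairs within each document and keep
    pairs above a raw-frequency threshold. The study retained pairs
    co-occurring more than five times (Church & Hanks, 1990)."""
    pairs = Counter()
    for doc in docs:
        # Deduplicate and sort so each unordered pair has one canonical key.
        words = sorted({w for w in doc if w not in STOPWORDS})
        for a, b in combinations(words, 2):
            pairs[(a, b)] += 1
    return {pair: n for pair, n in pairs.items() if n >= min_count}
```

On this toy corpus a lower threshold is needed for anything to survive: `cooccurrence(docs, min_count=2)` keeps the pairs ("china", "cybersecurity"), ("china", "law"), and ("cybersecurity", "law").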
Mapping Out Semantic Networks about “Cybersecurity” in News Narratives
The full semantic network of the news articles contains 41,053 words and 578,008 edges. The semantic networks for Chinese news articles were visualized using Gephi (Bastian, Heymann, & Jacomy, Reference Bastian, Heymann and Jacomy2009). We then filtered the ego-network of the term “cybersecurity,” which consists of 990 words (2.4 percent of the total number of words) and 84,801 edges (14.7 percent of the total number of edges) (Figure 12.1). We calculated the degree of each node, that is, the number of edges incident on it; the average degree of the network is 85.658. Each node represents a word, and the size of the label indicates the node strength, which is calculated by summing the weights of the edges belonging to the node. Edges are undirected and weighted. The modularity partition algorithm suggested that the network could be divided into eight communities, with a modularity score of 0.231.

Figure 12.1 Ego-network of cybersecurity in news coverage in the Chinese mainland, 2017–2023
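The node-level measures reported above can be made concrete with a short sketch. The edge weights below are invented for illustration only (the study computed these measures in Gephi on the full co-occurrence network): node strength is the sum of the weights of a node's incident edges, and the ego-network keeps the focal term, its direct neighbours, and all edges among them.

```python
# Hypothetical weighted edge list (word_a, word_b, co-occurrence count);
# the weights are illustrative, not the study's actual frequencies.
edges = [
    ("cybersecurity", "china", 120),
    ("cybersecurity", "sovereignty", 45),
    ("china", "development", 60),
    ("sovereignty", "law", 10),
]

def ego_network(edges, ego):
    """Extract the ego-network: the focal node, its direct neighbours,
    and every edge whose endpoints both lie in that node set."""
    nodes = {ego}
    for a, b, _ in edges:
        if a == ego:
            nodes.add(b)
        if b == ego:
            nodes.add(a)
    return [(a, b, w) for a, b, w in edges if a in nodes and b in nodes]

def node_strength(edges):
    """Node strength: sum of the weights of a node's incident edges."""
    strength = {}
    for a, b, w in edges:
        strength[a] = strength.get(a, 0) + w
        strength[b] = strength.get(b, 0) + w
    return strength
```

For instance, `node_strength(edges)["cybersecurity"]` is 165 (120 + 45), and the ego-network of "cybersecurity" retains only the two edges incident on it, since "development" and "law" are not among its neighbours.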
We identified topics for each word community by their top words and inductively summarized those topics into different themes. Six main clusters could be identified from the semantic network, in rank order: control and contestation, development and collaboration, personal information, infrastructure, governing actors, and mass media. Figure 12.1 presents the themes and top words for the semantic networks.
Control and contestation (Figure 12.2) is the largest cluster (283 nodes, or 28.6 percent of the total words) in the semantic network. Within the cluster, the word “cybersecurity (网络安全)” is associated with “中国 (China),” “国家 (nation[al]),” “安全 (security),” and “主权 (sovereignty).” Terms such as “党 (the CPC, i.e., the Communist Party of China),” “习近平 (Xi Jinping),” “法治 (rule of law),” “领导 (leadership),” “统一 (unity),” “意识形态 (ideology),” and “防火墙 (firewall)” encapsulate control over cybersecurity. The contestation of the term “cybersecurity” is specifically delineated through the use of words such as “保障 (safeguard),” “风险 (risk),” “维护 (maintain),” “防控 (prevention and control),” “挑战 (challenge),” “破坏 (undermine),” and “威胁 (threat).” Threats to cybersecurity are epitomized through specific terms such as “恐怖主义 (terrorism),” “反恐 (anti-terrorism),” and “反间谍 (counterintelligence).” Notably, this cluster is the only one that includes foreign entities, such as “美国 (the USA),” “欧盟 (European Union),” “美方 (the US),” “英国 (Britain),” “俄罗斯 (Russia),” and “乌克兰 (Ukraine).”

Figure 12.2 Control and contestation cluster
The second largest cluster, development and collaboration (Figure 12.3), involves 258 nodes (or 26.1 percent of the total words) that address cybersecurity in concert with economic development, technological innovation, and international collaboration. This cluster involves key terms related to the theme of development, such as “发展 (development),” “加强 (strengthen),” “创新 (innovation),” “推进 (advance),” and “完善 (enhancement),” while specifying different “领域 (domains),” including “经济 (economy),” “科技 (technology),” “产业 (industry),” “政策 (policy),” and “战略 (strategy).” It also contains words describing the scope of “合作 (collaboration)” in cybersecurity, such as “国际 (international),” “全球 (global),” “世界 (world),” “各国 (different countries),” “地区 (regional),” and “双方 (bilateral).”
Next is the personal information cluster (Figure 12.4), with 246 nodes (or 24.8 percent of the total words), which in essence discusses issues related to personal “data (数据)” in business. Terms such as “企业 (enterprise),” “平台 (platform),” “政府 (government),” “法律 (law),” “用户 (user),” “行业 (industry),” “公司 (company),” “未成年人 (juvenile),” “个人 (individual),” and “消费者 (consumer)” allude to various actors involved in the processes of, for instance, “管理 (manag[ing]),” “保护 (protect[ing]),” “规范 (standardiz[ing]),” and “监管 (supervis[ing]).”

Figure 12.4 Personal information cluster
Following the personal information cluster, the infrastructure cluster (Figure 12.5), with 109 nodes (16.5 percent of the total words), addresses aspects of technological and infrastructural applications in cybersecurity with keywords such as “建设 (construction),” “服务 (service),” “技术 (technology),” “系统 (system),” “人工智能 (AI),” “智能 (intelligence),” “基础设施 (infrastructure),” and “智慧 (smart).”
The governing actors cluster (Figure 12.6) consists of 55 nodes (5.6 percent of the total words) that allow us to pin down the concrete agencies involved in cybersecurity issues. More specifically, the included words identify the national- and local-level actors and agencies that handle cybersecurity, including, for instance, “中央 (central government)”; different levels of “人民政府 (people’s government[s]),” “有限公司 (Company Limited),” “公安部 (Ministry of Public Security),” and “网信(办) ([office of] Cyberspace Affairs Commission)”; municipal- and provincial-level public security bureaus such as the “北京市公安局 (Beijing public security bureau)” and “信息化(办) ([office of] informatization)”; and public health agencies (“卫生 [public health]” and “健康 [health]”). Most importantly, a critical group of governing actors comprises the various levels and divisions of party organizations, such as the “省委 (provincial CPC committee),” “市委 (municipal CPC committee),” “书记 (party secretary),” and “宣传部 (propaganda department),” to mention just a few. In other words, these terms signify the essential role of the CPC as well as its in-depth, multidimensional control in the governance of cybersecurity.

Figure 12.6 Governing actors cluster
The three remaining clusters (Figure 12.7), with a total of 37 nodes (3.7 percent of the total words), involve words that narrate the relationship between news industries and cybersecurity, as demonstrated in terms such as “媒体 (media),” “要闻 (hard news),” and the names of news organizations, including “新华社 (Xinhua News Agency),” “人民日报 (People’s Daily),” “新京报 (Beijing News),” “环球时报 (Global Times),” and “南方都市报 (Southern Metropolis Daily).”

Figure 12.7 Mass media cluster
Domesticating Cybersecurity
Although extant scholarship has presented an overall picture of internet governance and policies in China (e.g., Miao, Jiang, & Pang, Reference Miao, Jiang and Pang2021; F. Yang & Mueller, Reference Yang and Mueller2014), few studies have analyzed how policies and regulations that cater to specific subject matter have been constructed and appropriated for the national context of the audience – that is, the domestication that this study examines. In particular, our research reveals that the domestication of cybersecurity in mainland China highlights the CPC’s pivotal role in controlling, safeguarding, and advancing cybersecurity measures against the supposedly malicious cyber activities of the democracies. Such domestication not only reinforces the CPC’s comprehensive control over cybersecurity but also frames democracies as the source of unfounded cyber threats to China’s national and societal well-being. By doing so, we recognize the cultural construction of cybersecurity “through which common sense understandings are constructed and the foundations are laid” (Bernal, Reference Bernal2021, p. 612) for a national consciousness of cybersecurity. This study fills this gap by tracking, for the first time, the narrative and discourse on “cybersecurity” in news coverage in the Chinese mainland since the enactment of the Cybersecurity Law in 2017.
Similar to the contested discourse on cybersecurity elsewhere – for instance, the discourse in the United States uses metaphors of cybersecurity as a war and makes analogies to the Cold War between the United States and China (e.g., Bernal, Reference Bernal2021; Lawson, Reference Lawson2012; also see the Swedish case in Boholm, Reference Boholm2021) – the media narrative and frame in the Chinese mainland underlines contestations between China and the democracies – that is, the United States and beyond – as a key part of domestication. The most striking feature is that the domestication of cybersecurity in the Chinese context reiterates the CPC’s domination at the heart of the management, operationalization, and development of cybersecurity and, further, national security. By domesticating cybersecurity to the leadership of the CPC and ascribing cyber threats involving cyberattack and cyberterrorism to democracies, the Chinese regime not only legitimizes and consolidates its control over cyberspace but also utilizes cybersecurity to place blame on the democracies.
Our research question asked how news coverage domesticates “cybersecurity” in the Chinese mainland. The findings revealed four features. The first, and most prominent, is the political imperative of highlighting, ensuring, and protecting the dominant role of the CPC in deciding, developing, maintaining, and executing cybersecurity measures. As illustrated in the analysis, two of the six clusters, or themes – the control and contestation and the governing actors clusters – highlighted the hegemonic role of the CPC in cybersecurity. More specifically, in the control and contestation cluster, the CPC, together with named leaders such as President Xi Jinping, is articulated as the pivotal actor in safeguarding national sovereignty, societal security, and the rule of law and in countering cyber threats from the West. Terms such as “保障 (safeguard),” “强化 (reinforce),” “维护 (maintain),” and “统一 (unity)” further legitimize the hegemonic role of the CPC in cybersecurity through the evaluation of government policies dealing with cyber threats and challenges.
While the CPC’s domination remains the foundation of cybersecurity in China (as articulated in the control and contestation theme), the governing actors theme adds nuance to the multiplicity of agencies involved in the operationalization of cybersecurity issues. Here, the complexity has a two-fold meaning. For one thing, it refers to the engagement of multisectoral actors in what Liang and Lu (Reference Liang and Lu2010) describe as the “multidimensional regulatory system” – that is, multiple government departments and agencies, such as public security, public health, and the Cyberspace Affairs Commission, but also businesses (as illustrated in “有限公司 [Company Limited]”) that establish comprehensive control over the commercial, public interest, and social dimensions of cybersecurity in China. For another thing, and more importantly, multiplicity indicates the essential involvement of CPC organizations at all levels – from the central to the provincial and municipal levels and from the CPC committee to its propaganda department – in the management of cybersecurity.
Second, as signified in the control and contestation theme, the domestication of cybersecurity capitalizes on blaming cyber challenges and threats on democracies, including the United States, the European Union, and Japan, among others, to promote antagonism toward these countries. As shown in the control and contestation theme, the use of terms such as “terrorism,” “counterterrorism,” and “counterintelligence” is quite common in the narrative of cybersecurity. Moreover, the term “force (势力)” – often used together with “foreign (境外)” and “hostile (敌对)” to launch allegations of foreign interference – has frequently been used by domestic media as flag-waving, despite the lack of real instances of external meddling. In other words, the use of cyber threat-related terms, alongside vague references to democratic countries such as the United States and Japan, indicates that a narrative of foreign interference has become a regular feature of cybersecurity in the media discourse, with a wide range of (unspecified) individuals, groups, and countries denounced as adversaries to underscore the tensions between China and the democracies and to facilitate antidemocracy sentiment and assertive nationalism, both online and offline (e.g., Lehman-Ludwig et al., Reference Lehman-Ludwig, Burke, Ambler and Schroeder2023).
The first two features encapsulate what scholars observe as the reemergence of ideology (also a keyword in the control and contestation theme) in China’s development and outreach, which has dramatically shaped its internet policy. Ideology is, as Pieke (Reference Pieke2012) points out, “an indispensable aspect in the creation of regime support, no longer intending to generate ‘belief’ in the party, but to cultivate responsible, trusting, and ‘high-quality’ citizens who inhabit an active, autonomous, and governable society” (p. 150). The ideological turn and its further “marriage” with digital technologies exemplify not only the effort expended “on channeling and containing Internet expression through ideological work and cultural governance more broadly” (G. Yang, Reference Yang and Mueller2014, p. 112), as we can observe in other initiatives, such as “Telling China’s Story” to shape global narratives (Huang & Wang, Reference Huang and Wang2019), but also “an increasingly visible ideological thread vying to give coherence to an expanding system of Internet control” (G. Yang, Reference Yang and Mueller2014, p. 109). This thread promotes domestic ideology on a global scale via the internet (Martin, Reference Martin2021) and drives nationalistic sentiments, including a “wolf-warrior diplomacy” that seeks to aggressively defend China’s national interests (Zhu, Reference Zhu2020) and a series of nationalistic portrayals in Chinese cinema that convey the ideologies of the Chinese government and epitomize the state’s changing foreign policies (X. Yang, Reference Yang2023). In the case of cybersecurity and politics, the reemergence of ideology, with its risk of overreaction and arbitrariness, serves as a strategy to defend the CPC’s hold on power and overarching rule, leading to a state of hypervigilance with wide-reaching effects on China’s domestic and, especially, international policies.
Such a reemergence is further illustrated by, for instance, China’s position as a major advocate for cyber sovereignty in recent years (e.g., Hong & Goodnight, Reference Hong and Goodnight2020; Zeng, Stevens, & Chen, Reference Zeng, Stevens and Chen2017).
Third, the domestication of cybersecurity also points to both the global development of and international collaboration on cybersecurity issues (the development and collaboration cluster) and personal data security, mostly in business but also beyond it (the personal information cluster) (see similar discussions in Kuner et al., Reference Kuner, Svantesson, Cate, Lynskey and Millard2017). Nevertheless, both narratives – as the keywords identified in the semantic network show – remain general and vague and thereby have not made their way into the dominant discourse of the domestication of cybersecurity. For instance, despite the emphasis on strengthening international collaboration on cybersecurity, no specific country or international organization was named in the development and collaboration cluster, which implies that this initiative might be more of a political slogan than evidence-based policymaking.
Fourth, although the narrative of cybersecurity further includes topics such as technological and infrastructural applications (the infrastructure cluster) and media relationships, these topics, though crucial to cybersecurity elsewhere (e.g., Haber & Zarsky, Reference Haber and Zarsky2016), occupy rather marginal positions in the semantic network and thus in the domestication of cybersecurity in the Chinese mainland, as reflected in their small share of words in the cybersecurity ego-network.
In summary, our analysis brings to the forefront the discursive strategies used in news coverage related to the domestication of cybersecurity in mainland China, which seek to validate and strengthen the CPC’s authority and control over cybersecurity. This narrative often portrays Western democracies as the culprits behind cyberattacks, posing a significant threat to China’s national and societal security. Consequently, our analysis underscores the importance of understanding the regime’s cyber policies, going beyond the call to dismantle the Great Firewall or circumvent censorship. It highlights the unique role of mass media in shaping the domestication of cyber policies within the broader context of national discourse and identity.
Conclusion
This chapter enriches extant scholarship on internet governance and policies by focusing on media narratives on cybersecurity in authoritarian China. It delineates the specific way in which the term cybersecurity has been invoked and domesticated in local politics since the enactment of the Cybersecurity Law in 2017. The findings indicate that, although China was initially “a latecomer on the global cybersecurity scene” (Qi, Shao, & Zheng, Reference Qi, Shao and Zheng2018, p. 1343), the regime has strategically crafted and propagated cybersecurity narratives to resonate ideologically with the domestic audience, notably by portraying democracies as sources of cyber threats, including cyberterrorism. This domestic framing of cybersecurity serves to reinforce the CPC’s control over the internet and engenders anti-Western and anti-democratic sentiment. This study highlights at least two further implications. First, the concept of domestication could reconcile the false binary between inward- and outward-focused internet policies (e.g., X. Fu, Woo, & Hou, Reference Fu, Woo and Hou2016), recognizing their interdependent nature as shaped by domestic narratives. Second, this lens reveals a significant shift in discourse over a decade, from sporadic mentions of cyber warfare (Cai & Dati, Reference Cai and Dati2015) to positioning cybersecurity as a critical issue in domestic Chinese politics that, if mismanaged, poses a threat to both national security and internal stability, encompassing issues such as terrorism and territorial disputes. Recognizing this evolution is essential for a comprehensive understanding of China’s ascent and its approach to global cyber governance.
China’s Cyber Warfare against Taiwan
Taiwan, officially known as the Republic of China (ROC), is a self-governing democracy situated across the Taiwan Strait from the People’s Republic of China (PRC) (“Taiwan: Political and Security Issues,” 2023). Taiwan stood as an exemplar during the global “third wave” of democratization in the 1980s and 1990s, successfully transitioning from an authoritarian regime to a representative electoral system in a gradual and peaceful manner (Diamond, Reference Diamond2009; Myers & Chao, Reference Myers and Chao2003). At the heart of Taiwan’s democratic system lies its electoral process, through which 23 million citizens exercise their right to vote in the selection of presidents, Congress members, and local government officials (Fell, Reference Fell2018). As described by Freedom House, “Taiwan’s vibrant and competitive democratic system has allowed for regular peaceful transfers of power since 2000, and protections for civil liberties are generally robust” (Freedom House, n.d.).
However, the ongoing efforts by the PRC to exert influence over policymaking, media outlets, and the foundational pillars of democracy pose persistent challenges to Taiwan’s democratic system (Freedom House, n.d.). The PRC’s Anti-Secession Law, enacted in 2005, stipulates that non-peaceful means may be employed to protect China’s sovereignty and territorial integrity in the event of Taiwan’s secession or when peaceful unification options are deemed exhausted (Mainland Affairs Council, Republic of China (Taiwan), 2005). In line with this stance, at the Chinese Communist Party (CCP)’s 20th Party Congress in October 2022, the party’s leader, Xi Jinping, emphasized the necessity of unification with Taiwan for the rejuvenation of the Chinese nation and reiterated that the CCP would not renounce the use of force if deemed necessary (Ministry of Foreign Affairs of the People’s Republic of China, 2022). Beyond these official pronouncements, Taiwan has also come under increasing digital assault. In the early months of 2023, it emerged as the most targeted country in terms of cyberattacks, experiencing an average of over 15,000 attacks per second (Fortinet, Reference Fortinet2023). These orchestrated interventions, strategically implemented by China, are intended to systematically undermine civilians’ trust in Taiwan’s democratic processes.
The Chinese Cyber Warfare
China’s cyber warfare against Taiwan is conducted in a systematic and methodical manner. Influenced by the Gulf War, the PRC began establishing digitized forces and researching novel aspects of cyber warfare (劉 & 張, Reference Jiawei and Jiayuan2021, p. 122). In November 1999, the concept of a “cyber army” (网军) was first mentioned in the Liberation Army Daily, becoming a new branch alongside the Army, Navy, Air Force, and Second Artillery Corps (劉 & 張, Reference Jiawei and Jiayuan2021). The People’s Liberation Army (PLA) began the task of establishing “information warriors” (信息战士), with the goal of identifying talent within the information industry across various regions (林, Reference Yingyou2016, p. 59). In 2002, Major General Dai Qingmin, who served as the Director of the Fourth Department (Electronic Countermeasures and Radar) of the General Staff Department of the PLA, revealed in an internal report that the PLA had consolidated ten major patterns of “information warfare” (信息战), with a specific focus on “integrated network-electronic warfare” (网电一体战). This refers to the use of electronic warfare, computer network operations, dynamic targeting, and other methods to disrupt the enemy’s battlefield network information systems that support combat operations and force projection. The PLA believed that achieving electromagnetic superiority during the initial stages of a battle was paramount to ensuring victory on the battlefield (林, Reference Yingyou2016).
The overarching strategy employed by the Chinese cyber army involves the utilization of advanced persistent threats (APTs), which leverage the intricacies of human nature and employ sophisticated “social engineering” tactics (林, Reference Yingyou2013, pp. 102–103). These orchestrated and meticulously planned espionage activities distinguish themselves from conventional cybersecurity attacks. In pursuit of their objectives, attackers navigate through various stages of attack, employing diverse tactics to evade detection. These stages include the establishment of initial footholds, internal network scanning, and lateral movement between systems within the network, all aimed at reaching the ultimate target system. Upon carrying out their malicious activities on the target system, attackers reach a decision point. They may opt to remain within the network, continuing their harmful actions on other systems, or they may choose to exit the system after eliminating any traces, depending on the directives of their funding source. These multistage attacks typically begin with the infiltration of one of the network’s systems. Subsequently, privilege escalation techniques are executed as needed to reach the ultimate target system, gain access to sensitive systems, and transmit status updates or information back to the attackers’ command and control center (Alshamrani et al., Reference Alshamrani, Myneni, Chowdhary and Huang2019, p. 2). APTs demonstrate the meticulous planning, organization, and coordination among cyber army units (曾, Reference Yuzhen2020, p. 22). They exhibit characteristics of organized crime, often with the backing of adversarial governments, and are particularly adept at concealing their tracks (曾, Reference Yuzhen2020).
Taiwan’s Regulatory Response and Its Limitations
Although Taiwan boasts one of the freest online environments in Asia, the proliferation of cyberattacks has prompted the nation to embrace a top-down approach to shaping its cybersecurity policy. The Cybersecurity Management Act (資通安全管理法) serves as the primary legislation for cybersecurity and is part of the “cybersecurity-as-national-security” strategy (資安即國安) (國家資通安全辦公室, 2021). The law, passed in 2018, applies to government agencies and specific nongovernmental entities, including critical infrastructure providers, state-owned enterprises, and government-funded foundations (資通安全管理法,全國法規資料庫, 2022). The requirements for government agencies are modeled after the US Federal Information Security Management Act of 2002 (FISMA). Moreover, specific nongovernmental entities must develop and implement cybersecurity maintenance plans in accordance with their respective cybersecurity responsibility levels and establish incident reporting and response mechanisms. These guidelines should include references to and recommendations for the relevant requirements under the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27001 international information security management standard. The broad application of this provision to nongovernmental entities has drawn criticism from scholars and certain legislators for its perceived lack of specificity (劉 & 徐, Reference Jingyi and Xu2018, pp. 122–125; “「資通安全管理法」何去何從?,” 2018, p. 158).
To counter APTs from China, Taiwan has made significant legislative advancements. First, the National Security Act (國家安全法) was updated to include “cyberspace” as the “fifth domain” of national security protection (國家安全法,全國法規資料庫, 2022; 國家安全法異動條文及理由, 立法院法律系統, 2019). This amendment reflects the reality that national security threats have transcended physical boundaries, with organized cyber criminals posing significant risks by targeting and compromising national critical information infrastructure through internet connections. However, the legal definition of cyberspace remains ambiguous, lacking clarity and precision (蔡, 2019, p. 37).
Second, in 2022, the National Communications Commission (NCC) proposed a draft act called the Digital Intermediary Service Act (數位中介服務法) to combat disinformation. The draft legislation included provisions establishing liability for internet intermediaries, modeled after the EU’s Digital Services Act. It also included provisions for an “access restriction order,” inspired by Section 125 of the United Kingdom’s “Draft Online Safety Bill” from 2022 (“數位中介服務法草案總說明,” n.d.). However, the draft faced extensive criticism for infringing on freedom of expression and allegations of unconstitutionality (陳, Reference Chengliang2023). In response, the NCC announced its decision to refer the entire draft back to the internal digital convergence working group for thorough deliberation and examination of the contentious issues (Shan, Reference Shan2022).
Third, in consideration of data breaches involving nongovernmental organizations, the Executive Yuan passed an amendment to the Personal Data Protection Act (個人資料保護法) (新聞傳播處, 2023). The proposed legislation establishes an independent authority dedicated to data protection and significantly raises the penalty (up to NT$10 million) for private enterprises that fail to ensure the security of personal data. According to the National Development Council (國家發展委員會), this is part of the key strategies outlined in the “New Generation Anti-Fraud Strategy Action Plan” by the Executive Yuan, which focuses on strengthening the cybersecurity obligations of all stakeholders involved under the “Prevention of Fraud” initiative (新聞傳播處, 2023).
Taiwan remains committed to upholding a free and open internet. However, the prevalence of cyberattacks from China has led to the adoption of a top-down approach, involving increased government intervention and the establishment of national boundaries within the internet sphere. Taiwan faces a critical challenge: safeguarding its democratic institutions while navigating the pressures of cyber warfare. This situation reflects broader discussions about the normative behavior of governments and private parties online. This research will explore the underlying norms and metaphors shaping internet governance over time, gaining valuable insights to address the complex challenges of the digital age.
Norm Developments of the Internet
Norms and Metaphors
The internet was conceived in the late 1960s as a groundbreaking project by the U.S. Department of Defense (Leiner et al., Reference Leiner, Cerf, Clark, Kahn, Kleinrock and Lynch1997). Designed to interconnect research institutions and universities, it emerged as a sophisticated network for communication and resource sharing. Since its inception, the internet has undergone remarkable transformations, becoming the ubiquitous global network that defines our modern era. One of the internet’s most important features is its ability to construct virtual reality, which has sparked a wide range of perspectives (Kerr, Reference Kerr2003, p. 357). Building on these perspectives, scholars have increasingly emphasized the idea of a “norm” in the digital context, focusing on how it is defined and established within the internet.
According to political scientists Martha Finnemore and Kathryn Sikkink, a norm is “a standard of appropriate behavior for actors with a given identity” (Finnemore & Sikkink, Reference Finnemore and Sikkink1998, p. 891). The discourse surrounding norms in the digital realm is commonly referred to as “cyber norms,” with its primary focus on defining the expected conduct of governments in relation to global security and the stability of the internet (Finnemore & Hollis, Reference Finnemore and Hollis2016). A norm life cycle consists of three stages: norm emergence, norm cascade, and norm internalization. Norm emergence may result in a norm cascade once a tipping point has been reached, which is then followed by the norm’s internalization. The fundamental mechanism of the initial stage, norm emergence, entails the persuasive efforts of norm entrepreneurs who strive to sway a critical mass of states toward the adoption of novel norms. This process relies on the art of persuasion and the strategic influence wielded by these entrepreneurial actors to foster the acceptance and endorsement of emerging norms (Finnemore & Sikkink, Reference Finnemore and Sikkink1998, p. 897). In short, the development of norms is linked to the active engagement of norm entrepreneurs.
At its core, the development of norms in internet governance revolves around the metaphors promoted by norm entrepreneurs. These metaphors not only present legitimate regulatory functions and drive policy changes but also possess remarkable cognitive power, as they help us conceptualize complex ideas or phenomena (Frischmann, Reference Frischmann2007; Hunter, Reference Hunter2003). They serve as bridges between the technical complexities of the internet’s infrastructure and the sociopolitical dynamics that shape its governance. Moreover, metaphors encapsulate specific visions of the internet, rendering the abstract realities of the digital world more tangible and relatable.
In the subsequent sections, I will analyze the two influential norm entrepreneurs in the digital realm – the United States and China – to glean perspectives on the evolution of behavioral norms and their significant contributions to this process. I will specifically explore how these norm entrepreneurs have used two distinct metaphors to shape internet policies. This analysis could offer valuable insights into the governance of international relations, where the interplay of law and politics influences the actions of states and other relevant entities (Finnemore & Sikkink, Reference Finnemore and Sikkink1998, p. 916).
The United States and Cyberspace
The United States’ norm development was influenced by the “cyberspace” metaphor, popularized by William Gibson, a science fiction novelist, to describe a new place created by worldwide networks (Hunter, Reference Hunter2003, p. 441; Lemley, Reference Lemley2003, p. 524). According to traditional cyberlibertarianism, cyberspace is its own entity and therefore not subject to territorial regulation (Barlow, Reference Barlow1996). This vision depicted a free and open place where “land could be taken, explorers could roam, and communities could form with their own rules” (Hunter, Reference Hunter2003, pp. 442–443). Notably, David Johnson and David Post argued in a 1996 article that cyberspace should be left to develop its own self-regulatory structure, as there was no longer an obvious method to connect an electronic transaction communication to a particular nation-state jurisdiction (Johnson & Post, Reference Johnson and Post1996, p. 1367). Among the various proposals on self-governance, some theorists believe that online transactions should be governed by norms similar to those in the Lex Mercatoria – a set of norms that governed merchant transactions in medieval times (Hardy, Reference Hardy1994, pp. 1015–1025; Hunter, Reference Hunter2003, p. 448; Perritt, Jr., Reference Perritt1997, pp. 461–463; Reidenberg, Reference Reidenberg1998, p. 553).
While the internet has never been as independent or sovereign as early idealists believed, the “cyberspace” metaphor strongly influenced the US legal academic discourse, judicial pronouncements, and legislative enactments (Hunter, Reference Hunter2003, pp. 446–447). The concept of cyberspace as an unrestricted virtual realm is deeply rooted in the American philosophy of free speech (Bradford, Reference Bradford2023, pp. 33–68; Lessig, Reference Lessig2000, p. 6). It embodies the conviction that individuals possess the liberty to articulate their thoughts without censorship or unwarranted intervention, akin to the safeguards enshrined in the First Amendment of the U.S. Constitution (Bradford, Reference Bradford2023, p. 41). Most significantly, the cyberspace metaphor resulted in favoring a bottom-up rather than top-down approach to internet governance. This philosophy contributes to an evolving legal landscape in the United States shaping how online activities are governed and influencing the protection of user data and online security. For instance, the United States lacks a comprehensive cybersecurity law. Meanwhile, US legal discourse on cyber policy has been closely monitoring the density of regulations over the cyber world, with the goal of keeping it free and open (Lessig, Reference Lessig1996, p. 869; Mueller, Reference Mueller2020, p. 779).
The metaphor of “cyberspace” has significantly shaped the trajectory of the US international norm development in cybersecurity, particularly by emphasizing constraints on government actions in the digital domain. Since 2005, the United States has actively engaged in the United Nations (UN) Group of Governmental Experts (GGE), prioritizing the development of responsible state behavior to prevent interstate conflict and limit the use of cyberattacks during cyber conflicts (Lotrionte, Reference Lotrionte2013, p. 75; Mueller, Reference Mueller2020, pp. 786–787). The GGE’s efforts fostered constructive progress that culminated in the formation of a 2013 working group whose participants reached a consensus that the principles of the UN Charter, as well as international law, apply to the digital domain.
In 2015, the GGE released a report that acknowledged eleven voluntary norms – a significant milestone in advancing both the understanding of relevant international laws applicable to information and communications technologies (ICTs) and the imperative of safeguarding critical infrastructure (United Nations General Assembly, 2015). These principles were reaffirmed in the 2021 report by the UN Open-Ended Working Group (OEWG), an initiative originally seen as the GGE’s counterpart and sponsored by Russia (Broeders, Reference Broeders2021, p. 278; United Nations General Assembly, 2021).
Overall, the United States has promoted its norms through the “cyberspace” metaphor, emphasizing minimal government intervention in the digital realm and prioritizing voluntary measures to protect free speech. This approach has influenced US cybersecurity policies abroad and has been effectively championed in international forums such as the UN GGE, where similar norms appear in the UN OEWG, the Paris Call, and the Global Commission (“The 9 Principles,” n.d.; The Hague Centre for Strategic Studies, n.d.).
China and Internet Sovereignty
China views the internet as an extension of its territorial sovereignty, shaping its metaphorical perspective on how the internet should be governed (Bradford, Reference Bradford2023, p. 70). This notion was first introduced in the Chinese State Council Information Office’s 2010 publication, The Internet in China (Wang, Reference Wang2020, p. 397). The “internet sovereignty of China” in this context refers to the assertion that the internet within Chinese territory falls under Chinese jurisdiction – a significant step toward partitioning the internet along national boundaries. The Chinese government has spearheaded the legitimation and adoption of “internet sovereignty” through its 2017 Cybersecurity Law, representing the nation’s intent to assert control over the internet within its jurisdiction (Creemers, Webster, & Triolo, Reference Creemers, Webster and Triolo2018). Central to this framework are measures such as “public opinion guidance” and requirements for data localization by foreign companies (McKune & Ahmed, Reference McKune and Ahmed2018, p. 3835; E. Wu, Reference Wu2021, p. 1).
Acting as a norm entrepreneur, China has promoted internet content control norms in regional and international institutions under the principle of internet sovereignty. The Shanghai Cooperation Organization (SCO), jointly led by China and Russia, exemplifies this successful multilateral adoption of digital authoritarian norms and practices (McKune & Ahmed, Reference McKune and Ahmed2018, p. 3841). Formed in 2001, the SCO consisted of China, Russia, Kazakhstan, Kyrgyzstan, Tajikistan, and Uzbekistan. Over time, the organization has developed a robust normative framework and gained international prominence. In 2009, SCO member states adopted the Yekaterinburg Agreement, which established core principles for “international information security” and paved the way for proposing an “International Code of Conduct for Information Security” to the UN in 2011 and 2015 (Ministry of Foreign Affairs of the People’s Republic of China, 2011). This Code of Conduct emphasizes sovereignty, territorial integrity, and political independence, urging UN members to refrain from using ICTs to interfere in the internal affairs of other states or undermine their political, economic, and social stability (McKune & Ahmed, Reference McKune and Ahmed2018, p. 3841). Meanwhile, President Xi Jinping underscored the importance of “respect for cyber sovereignty” at the second World Internet Conference (WIC) held in Wuzhen in December 2015 (McKune & Ahmed, Reference McKune and Ahmed2018, p. 3845).
The concept of “sovereignty” has gained traction, resonating not only in authoritarian regimes but also in liberal democracies such as those in Europe (C.-H. Wu, Reference Wu2021, p. 659). The European Union (EU) has begun using the terms “technological sovereignty” and “digital sovereignty,” driven by the aim of enhancing its competitiveness in the digital realm and ensuring economic independence (Burwel & Propp, Reference Burwel and Propp2020, p. 1; European Commission, 2019, p. 3; von der Leyen, Reference von der Leyen2020). This concept encompasses preserving strategic autonomy and safeguarding security interests, as highlighted by the European Commission (European Commission, 2019, p. 3). The Digital Silk Road (DSR) extends the opportunity to propagate similar ideologies to African nations, potentially importing principles of “internet sovereignty.” One noteworthy policy proposition is “data localization,” which rests on government control, self-determined economic advancement, and societal structuring. This idea holds substantial appeal worldwide, extending well beyond nondemocratic governments (Erie & Streinz, Reference Erie and Streinz2021). The internet appears to be moving toward greater balkanization, marked by the creation of national boundaries that restrict the flow of information within specific jurisdictions (Lemley, Reference Lemley2021, p. 1399).
In general, China’s norm development in the cybersecurity realm has advanced significantly, with one notable achievement being the increased awareness among nations of their sovereignty in the digital sphere. As a result, there has been a growing inclination to implement regulations and policies that exercise state control over the internet. By consistently using the term “sovereignty,” President Xi has emphasized the importance of upholding the principle of internet sovereignty, highlighting the need for nations to assert their authority over their respective digital domains and establish governance mechanisms aligned with their national interests.
Taiwan’s Metaphorical Choice: The Internet as Commons
The main implications of the two norm developments, each driven by a distinct metaphor, center on differing cybersecurity strategies. The US conception of “cyberspace” emphasizes voluntary measures, whereas China treats the internet as an extension of its national territory and favors regulatory controls. However, both perspectives face significant challenges. Under the “cyberspace” metaphor, the notion of a free and open internet, while aligned with the original vision of the internet, leaves democratic institutions vulnerable to attacks. As a result, the National Cybersecurity Strategy, published by the White House in March 2023, calls for coherent regulations in critical sectors, signaling a shift toward a model more akin to China’s (The White House, 2023). Meanwhile, the “internet sovereignty” metaphor – by endorsing stringent regulatory measures – risks creating a “splinternet,” characterized by regulatory conflicts and the potential for authoritarian regimes to curtail online freedoms (Lemley, Reference Lemley2021, pp. 1418–1421).
An alternative path for developing internet norms can be fostered by adopting the metaphor of the internet as a “commons.” This approach is not without theoretical foundations – scholars and politicians have long used the term “commons” to characterize the internet, with some referencing concepts such as the “global commons,” “semi-commons,” “pseudo commons,” or commons within the economic context (111th Congress, 2010; Benkler, Reference Benkler2003; Chertoff, Reference Chertoff2014; Frischmann, Reference Frischmann2013; Hess, Reference Hess1996; Hess & Ostrom, Reference Hess and Ostrom2003; Lessig, Reference Lessig2001; Mueller, Reference Mueller2020; Shackelford, Reference Shackelford2013, Reference Shackelford2020; Shiffman & Gupta, Reference Shiffman and Gupta2013). Embracing the “commons” metaphor to depict the internet offers two primary advantages.
First, the metaphor of the “commons” aptly captures the shared nature of the internet (Hess, Reference Hess1996). The term “commons” generally refers to a resource shared by a group of people subject to social dilemmas (Hess & Ostrom, Reference Hess and Ostrom2007, pp. 3–4). One specific type of shared resource system, known as a “common-pool resource,” combines the subtractability characteristic of private goods with the difficulty of exclusion typically associated with public goods (Ostrom, Reference Ostrom2010, pp. 644–645). Forests, fisheries, and irrigation systems are prominent examples of common-pool resources worldwide (Ostrom, Reference Ostrom2010, p. 645).
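The typology behind this definition – crossing subtractability with ease of exclusion – can be sketched as a small lookup table (an illustrative Python sketch; the examples in the comments are mine, not drawn from the cited sources):

```python
# Classic goods typology: (high subtractability?, easy exclusion?) -> type.
# A common-pool resource pairs the subtractability of private goods with
# the difficulty of exclusion of public goods, as described above.
GOODS = {
    (True,  True):  "private good",          # e.g., a loaf of bread
    (True,  False): "common-pool resource",  # e.g., a fishery
    (False, True):  "club/toll good",        # e.g., a subscription service
    (False, False): "public good",           # e.g., national defense
}

def classify(subtractable, excludable):
    """Map a resource's two attributes to its place in the typology."""
    return GOODS[(subtractable, excludable)]
```

On this scheme, forests, fisheries, and irrigation systems fall under `classify(True, False)`: one user's harvest subtracts from what others can take, yet keeping users out is difficult.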
Given the internet’s intricacies, the levels of subtractability and exclusivity vary depending on the aspect of the resource system under examination. Exclusion from internet resources is highly fragmented and piecemeal. When many users access the internet simultaneously, congestion can arise, highlighting the issue of subtractability (Hess, Reference Hess1996). However, the attributes of difficult exclusion and high subtractability do not fully encompass the diverse resources of the internet. Accordingly, this research focuses on the notion of the “commons” to capture the internet’s shared nature and highlights three shared resources:
The cable commons: The internet relies on a vast physical infrastructure, with a network of high-performance submarine cables carrying 99 percent of global traffic between countries and continents (Takeshita et al., Reference Takeshita, Sato, Inada, de Gabory and Nakamura2019, p. 36). This infrastructure can be referred to as the cable commons, comprising more than 552 submarine cables that span the ocean floor and link to over 1,300 distinct coastal landing stations (McDaniel & Zhong, Reference McDaniel and Zhong2022; TeleGeography, n.d.). Ownership of submarine cables is heavily concentrated in the private sector, with about 99 percent privately owned by prominent telecom carriers, content delivery providers, and investor groups (Burnett, Reference Burnett2021). While network operators have traditionally been the primary investors, content providers such as Google, Amazon, Microsoft, and Meta have also expanded their investments to ensure seamless interconnection between their data centers (Wall & Morcos, Reference Wall and Morcos2021). Excluding others from using the same submarine cables can be challenging, given their shared nature. Congestion may occur due to high demand, technical issues, underinvestment, or geopolitical factors.
The communications commons: The internet’s most significant accomplishment is its standardized means of communication, enabled by a set of globally accepted protocols. The communications commons refers to the shared, interoperable, and equitable nature of internet communication. This commons is primarily maintained by several organizations, including the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), and Institute of Electrical and Electronics Engineers (IEEE). A foundational conceptual framework for coordinating the development of interconnection standards is the Open Systems Interconnection (OSI) model, developed by the ISO. This model consists of seven abstraction layers, including physical, data link, network, transport, session, presentation, and application layers (ISO, 1994). Each layer represents a specific aspect of the communication process in a computing system. These standards are designed to be open and universal, enabling anyone, anywhere, to build software or hardware that connects to the internet. Because of these open standards and the interoperable nature of the internet, excluding others can be difficult. However, congestion may occur due to network architecture, peak usage, heavy users, or distributed denial of service (DDoS) attacks.
The content commons: The “content commons” refers to the collective body of information and speech that flows across the internet, shaping the virtual world. It encompasses content created by both users and organizations such as governments, corporations, and academic institutions. Much of this commons is hosted on platforms controlled by large US and Chinese corporations – ranging from social networking to search engine services and e-commerce (CompaniesMarketCap, 2023). Within these platforms, diverse social groups establish various sub-commons, each with unique functions, thereby fostering distinct communities. Sharing content on the web remains relatively easy due to low costs, wide reach, and the prevalence of social media platforms. However, congestion can occur because of information overload, the tendency to prioritize quantity over quality, and the lack of effective content discoverability.
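The OSI model’s layered encapsulation, noted under the communications commons above, can be sketched as follows (illustrative Python; the bracket notation stands in for real per-layer headers):

```python
# The seven OSI abstraction layers (ISO/IEC 7498-1), lowest to highest.
OSI_LAYERS = [
    "physical", "data link", "network", "transport",
    "session", "presentation", "application",
]

def encapsulate(payload):
    """Illustrative only: wrap a payload in one mock header per layer
    as data descends the stack, so the application header sits
    innermost and the physical layer outermost on the wire."""
    for layer in reversed(OSI_LAYERS):
        payload = f"[{layer}]{payload}"
    return payload
```

For example, `encapsulate("msg")` yields a string beginning with `[physical]` and ending with `[application]msg`, mirroring how each layer handles only its own aspect of the communication process.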
Second, Ostrom’s scholarly inquiry into the governance of commons offers profound insights that can significantly enhance internet governance, particularly regarding cybersecurity strategies. Ostrom’s main contribution to the commons theory lies in her formulation of eight institutional design principles, which are strongly tied to the effectiveness of institutions in managing common-pool resources. These principles were distilled from the successful commons management systems she identified among the cases examined in Governing the Commons (Ostrom, Reference Ostrom2015). The eighth design principle, which addresses larger and more complex systems of common-pool resources, specifies that various governance activities associated with robust institutions should be organized across multiple layers of nested enterprises. In subsequent work, Ostrom and others have sometimes used the term “polycentric” interchangeably with, or in reference to, the “nested” requirement of the eighth design principle – though polycentricity implies more than nestedness (Carlisle & Gruby, Reference Carlisle and Gruby2017, p. 930).
While there is no unified definition of “polycentric governance,” at the heart of nearly every discussion is the notion of multiple centers of decision-making, where none of them has ultimate authority for making collective decisions (Stephan, Marshall, & McGinnis, Reference Stephan, Marshall, McGinnis, Thiel, Garrick and Blomquist2019, p. 31). This mirrors the structure of the internet, wherein the decision-making centers encompass national authorities, international organizations, private companies, and individuals. Recognizing this, when governance systems are structured in a polycentric manner – from the smallest to the largest scales – they become capable of addressing collective action problems across multiple levels (Ostrom, Reference Ostrom, Brousseau, Dedeurwaerdere, Jouvet and Willinger2012, p. 107). This arrangement inherently fosters the development of norms among participants through a variety of mechanisms, all interconnected and mutually reinforcing. It emphasizes the potential for effective cooperation, conflict resolution, fruitful competition, and shared learning (Bruns, Reference Bruns, Thiel, Garrick and Blomquist2019, p. 237). As a result, polycentricity highlights the organic and self-organizing nature of internet governance – where multiple centers of decision-making interact and collaborate – and stands in contrast to a purely market- or state-centric approach (Stephan, Marshall, & McGinnis, Reference Stephan, Marshall, McGinnis, Thiel, Garrick and Blomquist2019).
The Taiwan–China conflict illustrates the clash between two different internet metaphors. The US concept of “cyberspace” encourages minimal regulatory intervention and is less effective in mitigating cyberattacks, whereas China’s “sovereignty” perspective prioritizes stricter regulations and enhanced security, often at the expense of freedom of expression. Embracing the metaphor of the “internet as a commons” offers Taiwan valuable insights for formulating its cybersecurity strategy. It directs attention toward identifying shared resources and decision-making centers within the internet. This metaphor serves as a critical tool for interpreting, shaping, and navigating the intricate landscape of policies and protocols that underpin Taiwan’s cybersecurity frameworks. It also promotes meaningful dialogue among diverse stakeholders – policymakers, technologists, academics, civil society, and businesses – that often transcends disciplinary and cultural boundaries. By adopting this approach, Taiwan gains a strategic advantage without veering toward an overly idealized “cyberspace” metaphor or fully embracing “internet sovereignty.” Individuals can safeguard and coordinate the management of shared resources without relying on centralized rulemaking, thereby enhancing security and fostering norm development (Shiffman & Gupta, Reference Shiffman and Gupta2013, p. 100).
Defending Taiwan’s Democracy in the Internet Commons under Polycentric Governance
Tragedy of the Internet Commons
Vincent Ostrom, Charles Tiebout, and Robert Warren (Reference Ostrom, Tiebout and Warren1961) introduced the concept of polycentricity in their endeavor to ascertain the nature of activities undertaken by numerous public and private agencies involved in the provision and production of public services within metropolitan areas (Carlisle & Gruby, Reference Carlisle and Gruby2017, p. 928). V. Ostrom’s idea of polycentricity goes beyond specific domains and encompasses various aspects of societal organization, including economic markets, legal systems, scientific disciplines, and multicultural societies. In the realm of politics, federalism stands as a key example of polycentricity (Stephan, Marshall, & McGinnis, Reference Stephan, Marshall, McGinnis, Thiel, Garrick and Blomquist2019, p. 24). As mentioned earlier, Elinor Ostrom adopted the term in her work on governing the commons, making it a central pillar of the Bloomington School of Political Economy (Carlisle & Gruby, Reference Carlisle and Gruby2017, p. 930).
The main purpose of Ostrom’s polycentric governance is to mitigate the risk of the “tragedy of the commons” (Hardin, Reference Hardin1968). For valuable open-access resources, the absence of an effective governance regime – whether established by the involved parties themselves or by external authorities – may lead to suboptimal outcomes. As internet usage increases, it introduces more threat vectors and provides malicious actors with an expanded range of networks to target, creating a scenario akin to the “tragedy of the commons.” While the broader China–Taiwan conflict resembles a vibrant threat system, the emergence of APTs and activities such as undersea cable disruptions, DDoS attacks, and disinformation campaigns within the internet context are analogous to the overexploitation of resources across the different internet commons. These developments present a collective action problem that falls within the realm of classic social dilemmas (Shackelford, Reference Shackelford2013, p. 1293).
Tragedy of the Cable Commons
The tragedy of the “cable commons” refers to intentional interference with submarine cables and their use as tools for intelligence gathering. Despite their pivotal role in the digital economy, submarine cables remain surprisingly vulnerable, and the regulations governing their security are antiquated (McDaniel & Zhong, Reference McDaniel and Zhong2022). The governance of the cable commons occupies a gray area, which some have described as “the orphans of international law” (Beckman, Reference Beckman, Burnett, Beckman and Davenport2014, p. 281). Relevant conventions, including the 1884 Convention for the Protection of Submarine Telegraph Cables, the 1958 Convention on the Continental Shelf, and the 1982 United Nations Convention on the Law of the Sea (UNCLOS), provide only a limited degree of peacetime protection for submarine cables. Their applicability during times of conflict, however, remains contested (McDaniel & Zhong, Reference McDaniel and Zhong2022).
Intentional interference can originate from state or non-state actors, serving various objectives. These may include disrupting military or government communications in the early stages of a conflict, cutting off internet access for a targeted population, sabotaging economic competitors, or causing economic disruptions for geopolitical reasons (Davenport, Reference Davenport2015; Wall & Morcos, Reference Wall and Morcos2021). Taiwan is connected by fifteen submarine cables (McDaniel & Zhong, Reference McDaniel and Zhong2022). This network – protected by advanced encryption – has landing points in Toucheng, Taiwan; Baler, Philippines; and El Segundo, California, yet it represents a significant vulnerability for the nation’s cybersecurity. For instance, in February 2023, two submarine cables connecting Taiwan and Matsu were severed, disrupting internet access for residents of the Matsu Islands. In an invasion scenario, beyond physically cutting these cables, China could deploy submarines or unmanned underwater vehicles (UUVs) to locate and sever cables, launch cyberattacks that result in data disruption, and use devices that generate electromagnetic pulses (EMPs) to damage cables or their connected infrastructure (陳, Reference Chengliang2023).
It is also possible to tap cables to intercept and steal data for espionage. Edward Snowden revealed that the United States and the United Kingdom have been directly intercepting the internet backbone (Davenport, Reference Davenport2015). Moreover, the need for cable tapping in espionage may become redundant when a state owns the infrastructure. One important feature of the DSR involves Chinese technology firms partnering with non-Chinese counterparts to construct undersea cables (Erie & Streinz, Reference Erie and Streinz2021). China’s ongoing efforts to assert control over various islands in the South China Sea further exacerbate the issue, allowing it to lay its own network cables away from international scrutiny. With ambitions to expand 5G networks – led by Huawei – the Chinese government stands to exert even greater influence over the flow of information entering and exiting the country (Martin, Reference Martin2019).
Tragedy of the Communications Commons
Cyberattacks predominantly occur within the “communications commons” and take various forms. Their primary goal is to disrupt shared internet communications. A common type is the DDoS attack, which involves flooding the network’s communications channels. In one method, an attacker sends a continuous stream of packets to a target, depleting critical resources and rendering the system inaccessible to legitimate clients. Another tactic uses a few maliciously crafted packets to disrupt an application or a protocol on the victim’s machine, causing it to freeze or require a reboot. Such attacks are feasible because internet security is highly interdependent and internet resources are finite (Mirkovic & Reiher, Reference Mirkovic and Reiher2004, p. 40). Other cyberattacks include malware, phishing, ransomware, spoofing, and eavesdropping (Fortinet, n.d.). One major consequence of cyberattacks is a data breach, in which unauthorized entities gain access to sensitive or confidential information (Kosinski, Reference Kosinski2024).
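The resource-depletion mechanism behind the first method can be illustrated with a toy model (hypothetical Python; the `capacity` value and request labels are invented for illustration):

```python
def serve(requests, capacity=3):
    """Toy model of connection exhaustion: each request occupies one of
    `capacity` server slots and never releases it (as in a half-open
    flood); once the finite pool is full, later arrivals are rejected,
    including legitimate clients."""
    slots, outcome = [], []
    for who in requests:
        if len(slots) < capacity:
            slots.append(who)
            outcome.append((who, "accepted"))
        else:
            outcome.append((who, "rejected"))
    return outcome
```

With `serve(["attacker"] * 3 + ["client"])`, the attacker's stream fills every slot first, so the legitimate client is turned away: the system is denied to its intended users without any single packet being individually harmful, precisely because the underlying resources are finite.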
Taiwan experiences persistent daily DDoS attacks from China. For instance, in 2018, the computer systems of Taiwanese government departments were subjected to frequent cyberattacks and vulnerability probing, exceeding 10 million incidents per month – more than half of which originated from Chinese information warfare units. In 2022, following a visit to Taiwan by U.S. House Speaker Nancy Pelosi, Taiwan’s Ministry of National Defense reported a DDoS attack that took down its network for about two hours (Miller, Reference Miller2022). Hackers have also targeted the websites of the Presidential Office, Ministry of Foreign Affairs, and Ministry of National Defense.
Additionally, Taiwan has experienced numerous significant data breaches affecting both government agencies and the private sector across various industries (李, Reference Jiaqi2023). Reports indicate that personal data from key intelligence systems, including the National Security Bureau and the Military Intelligence Bureau, have been compromised and traded in overseas markets (李, Reference Jiaqi2023). In addition, domestic and international cybersecurity agencies have repeatedly found Trojan programs in devices produced by well-known Chinese smartphone brands. These programs covertly transmit users’ personal information, captured photos, and network communications to specific addresses, enabling surveillance of smartphone users and creating a “cybersecurity black hole” (國防部政治作戰局, 2017).
Tragedy of the Content Commons
The tragedy of the content commons refers to the disruption of a shared content ecosystem. It can arise from “information warfare,” a strategy that involves controlling information to gain a competitive advantage. This strategy encompasses both offensive and defensive operations. Strategy, in this context, refers to the process of planning to achieve national objectives and goals, while operations serve as the bridge between strategic objectives and specific tactics, techniques, and procedures. This linkage is facilitated through information operations (IO). Categories of information used in IO include propaganda, misinformation, and disinformation. Notably, the Russian government has been accused of employing bots to spread disinformation and sow discord in various contexts, including the 2016 US presidential election (Theohary, Reference Theohary2018).
Beijing’s information warfare targeting Taiwan has been focused on promoting the PRC’s model of governance and fostering polarization, aiming to undermine confidence in Taiwan’s democratic process (Faust, Reference Faust2023). Doublethink Lab, a Taiwanese organization researching the impact of digital authoritarianism, has identified several methods used by Chinese actors. These include social media influencers amplifying disinformation from CCP-backed content farms and operations involving collaboration with actors who often recruit Taiwanese agents to carry out influence campaigns originating from the mainland (Lee et al., Reference Lee, Tseng, Kao, Wu and Shen2020, pp. 22–39).
The task of preserving the content commons through regulations has been arduous due to the inherent risk of “collateral censorship” associated with any form of speech regulation (Balkin, Reference Balkin1999, p. 2298). When the state holds one private party, “A,” liable for the speech of another private party, “B,” A has an incentive to avoid any potential liability by restricting even fully protected speech. This dynamic often leaves the content commons largely unregulated, especially in democratic states. Meanwhile, authoritarian regimes have enacted laws to censor online speech, ostensibly in the interest of national security. These measures include dictating the permissible forms of speech on websites, exemplified by the 2017 Chinese Cybersecurity Law (Creemers, Webster, & Triolo, Reference Creemers, Webster and Triolo2018). In summary, striking a delicate balance between protecting free speech and upholding national security remains a significant challenge.
Taiwan’s Polycentric Governance in the Internet Commons
Within the internet commons, global problems involve a diverse array of actors, extending beyond governments to include corporate entities that serve as agents for complex publics and exhibit significantly intricate behavior (McGinnis & Ostrom, Reference McGinnis and Ostrom1992). As mentioned, nearly every discussion of polycentric governance revolves around the concept of multiple “decision-making centers” (Stephan, Marshall, & McGinnis, Reference Stephan, Marshall, McGinnis, Thiel, Garrick and Blomquist2019, p. 31). This concept highlights the distributed nature of governance, where power and decision-making are dispersed across various institutions. Jurisdictions of authority may overlap, with many centers of decision-making operating formally independently of one another (Ostrom, Tiebout, & Warren, Reference Ostrom, Tiebout and Warren1961). The goal of polycentric governance is to facilitate the use of local knowledge and build mutual trust (Cole, Reference Cole2015). Local communities possess the skills, local knowledge, and capacity to overcome many challenges, making it essential to resolve problems as close to these communities as possible. This approach effectively addresses the challenges posed to traditional models of democracy centered on the nation-state (Scholte, Reference Scholte and Kohl2017, p. 167). Polycentricity challenges the belief that either the state or markets alone hold the solution to addressing complex challenges (Shackelford, Reference Shackelford2013, p. 1333). Instead, it creates an effective mechanism for cooperation, coordination, conflict resolution, and the utilization of local knowledge.
Taiwan’s Role as a Public Entrepreneur in the Cable Commons
To effectively address collective action problems, it is critical to foster entrepreneurship and innovation across local, regional, national, and international domains. Taiwan, as a decision-making center, has the potential to act as a public entrepreneur in the cable commons, advocating policy changes for maintaining the cable commons among governments, cable operators, and cable owners. As explained by Elinor Ostrom, “Entrepreneurship is a particular form of leadership focused primarily on problem solving and putting heterogeneous processes together in complementary and effective ways, rather than simply making public speeches and being charismatic” (Ostrom, Reference Ostrom, Brousseau, Dedeurwaerdere, Jouvet and Willinger2012, p. 107). Entrepreneurship can be further understood as “acts performed by actors who seek to punch above their weight,” distinguishing them from those who merely perform their duties and act appropriately (Boasson, Reference Boasson, Jordan, Huitema, van Asselt and Forster2018, p. 119). In relation to the concept of “norm entrepreneurs,” as discussed in international relations theory, both norm and public entrepreneurs serve as agents of change. While the former concentrates on shaping norms and values, the latter seeks to drive policy change.
The governance of the cable commons, as mentioned, is a gray zone that poses significant concerns due to its vulnerability to disruption and penetration. Taiwan’s role as a public entrepreneur in this domain should focus on promoting best practices, including diversification; cable installation, operation, and maintenance; information sharing among allies; contingency planning; and the development of a variety of regulatory regimes and international legal frameworks (European Agency for Cybersecurity, 2023). This role is particularly important given the ongoing competition between China and the United States over control of undersea cables.
Specifically, regional and international cooperation on information sharing should be established as a mechanism for interorganizational, intersectoral, and intergovernmental exchanges of data deemed relevant by the sharers for resolving collective action problems (Housen-Couriel, Reference Housen-Couriel, Shackelford, Douzet and Ankersen2022). Taiwan and its allies can formulate joint patrols or task forces with regional partners to share intelligence and coordinate countermeasures. The goal is to strengthen infrastructure by investing in cable armoring, deep burial, and decoy cables, while collaborating with international organizations to establish accountability systems (陳, Reference Chengliang2023).
The Industry’s Role of Fostering a Security Culture in the Communications Commons
In the communications commons, each company within the industry serves as a decision-making center that maintains internet communication by fostering a cybersecurity culture. Cybersecurity culture can be understood as a set of rules regarding best cybersecurity practices, expressed through either formal regulations or informal social norms and values. It refers to norms spanning industries, individuals, and governments that promote best cybersecurity practices, given that human error is one of the biggest security threats. Under polycentric governance, companies act as local communities: they inherently possess valuable skills, local knowledge, and the capacity to overcome diverse challenges. It is therefore essential to address problems at the community level, recognizing the potential for localized solutions.
The industry can cultivate a cybersecurity culture in two ways: by nurturing it within individual companies and by collectively developing a cybersecurity framework. Fundamentally, security cultures should be rooted in and aligned with the broader organizational culture (Nasir et al., Reference Nasir, Arshah, Ab Hamid and Fahmy2019; Uchendu et al., Reference Uchendu, Nurse, Bada and Furnell2021). Studies have shown that critical factors in fostering a security culture include top management support, clear policies and procedures, and information security awareness and training. Specifically, without management support, cybersecurity initiatives may not appear significant to employees when weighed against their daily responsibilities (Uchendu et al., Reference Uchendu, Nurse, Bada and Furnell2021). It is critical to establish a community and an environment of trust to effectively implement and sustain a cybersecurity culture (Batteau, Reference Batteau2011). As mentioned, adversaries utilize advanced persistent threats (APTs) to execute precise and covert cyberattacks on organizations, often remaining concealed within enterprise networks for extended periods, sometimes months or even years (Mahmoud, Mannan, & Youssef, Reference Mahmoud, Mannan and Youssef2023). Research indicates that people are often the weakest point in the cybersecurity chain. Internal users may either intentionally disclose sensitive information to external entities or inadvertently provide valuable information to adversaries with sophisticated expertise and significant resources. Fostering a strong security culture becomes a crucial step in raising awareness to reduce the likelihood of APTs (Alshamrani et al., Reference Alshamrani, Myneni, Chowdhary and Huang2019, p. 1873). The ultimate goal is to build a “solid and effective human firewall” (Marotta & Pearlson, Reference Marotta and Pearlson2019, p. 9).
Furthermore, the Taiwanese industry and government can coordinate to establish a cybersecurity framework. Ideally, industry groups most familiar with best practices should be allowed to craft local rules, which can then be augmented and enforced (Shackelford, Reference Shackelford2013, p. 1353). Such a framework can be voluntary, at least in the beginning. One example is the cybersecurity framework developed by the National Institute of Standards and Technology (NIST). In response to Executive Order 13636, NIST employed a year-long process involving active dialogue with multiple stakeholders, establishing a bottom-up approach to cybersecurity (Peng, Reference Peng2018, p. 451; Shackelford et al., Reference Shackelford, Proia, Martell and Craig2015). A voluntary cybersecurity framework for Taiwan should also be created through an inclusive and transparent process, involving stakeholders from the private sector, civil society, and government. This would complement the Taiwanese government’s top-down approach. Different frameworks may address different risks, such as those related to cybersecurity, privacy, and artificial intelligence (AI). By promoting collaboration and drawing on best practices from both the public and private sectors, such an initiative would foster a culture of proactive and voluntary cyber defense measures.
The Community’s Role in Content Moderation in the Content Commons
In the content commons, online communities – including fact-checking initiatives – play a key role in maintaining the shared content ecosystem. Individuals within each community can collaborate to create a network for verifying and disseminating accurate information. The Taiwanese online community, in particular, ought to foster a strong sense of collective responsibility for content moderation, empowering individuals to actively shape and maintain a healthy online environment. Their goal is to counter information warfare at the local level. This approach should center on two key elements: first, fostering both cooperation and competition between professional and crowdsourced fact-checking initiatives and, second, implementing a multilevel governance framework.
First, professional and crowdsourced fact-checking initiatives can be perceived as decision-making centers within the context of polycentric governance. The number of professional fact-checking outlets around the globe has grown remarkably, from just eleven sites in 2008 to 424 in 2022 (Stencel, Ryan, & Luther, Reference Stencel, Ryan and Luther2023). Taiwan, in particular, boasts a robust environment for fact-checking centers, with organizations such as MyGoPen and Taiwan FactCheck Center. At the same time, Cofacts, a crowdsourced fact-checking center, focuses on local and daily matters, helping to mitigate the effects of everyday misinformation. Research indicates that Cofacts plays a complementary role alongside professional fact-checkers. It leverages the global, cross-language perspectives offered by professional fact-checking organizations while offering faster responses to fact-checking needs (Saeed et al., Reference Saeed, Traub, Nicolas, Demartini and Papotti2022). The overlapping jurisdictions between these two types of fact-checking initiatives are integral to the dynamic of polycentric governance, which guarantees competition and cooperation among them (Stephan, Marshall, & McGinnis, Reference Stephan, Marshall, McGinnis, Thiel, Garrick and Blomquist2019, p. 33).
A key aspect of a polycentric system is the concept of “nested enterprises,” where governance activities are organized in multiple layers of related governance regimes. Large tech companies and local administrators can be perceived as part of a nested ecosystem. Given that one of the central challenges within the content commons is the centralization of platforms, an approach to decentralizing platform power involves introducing intermediary layers of local administration. Research suggests several design implications for local administrators: platforms should support them in experimenting with community guidelines, sanctioning criteria, and automation settings; allow cross-cutting membership so users can participate in multiple communities; foster healthy competition; hold decision-makers accountable for poor performance; and provide mechanisms for conflict resolution (Jhaver, Frey, & Zhang, Reference Jhaver, Frey and Zhang2021). This approach could be particularly impactful for local administrators on popular social media platforms in Taiwan, such as Facebook.com, Line.me, Instagram.com, ptt.cc, and dcard.tw. The goal is to involve volunteer subcommittees in Taiwan in actively enforcing platform moderation policies and helping to establish local community norms.
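The nested arrangement described above can be sketched as a layered policy check: a post is evaluated first against the platform-wide baseline that applies everywhere, then against the local rules its community’s administrators have chosen. All rule names and community names below are illustrative assumptions, not any platform’s actual policy or API.

```python
# Illustrative sketch of nested moderation in a polycentric system:
# a platform-wide baseline applies everywhere, while each community's
# local administrators layer their own stricter rules on top.

PLATFORM_RULES = {"spam", "doxxing"}          # baseline, enforced platform-wide

COMMUNITY_RULES = {                           # customized by local administrators
    "taiwan-news": {"unverified-rumor"},      # stricter: bans unsourced claims
    "open-chat": set(),                       # permissive: baseline only
}

def moderate(community, labels):
    """Return the moderation outcome for a post, given the set of
    rule labels its content matches."""
    if labels & PLATFORM_RULES:
        return "removed: platform baseline"
    if labels & COMMUNITY_RULES.get(community, set()):
        return "removed: local community rule"
    return "allowed"
```

The same post can thus be removed in one community and allowed in another, while baseline violations are removed everywhere; this overlap of jurisdictions is exactly the nesting the paragraph describes.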
Conclusion
In conclusion, the development of internet governance has been driven by two distinct metaphors – the “cyberspace” metaphor and the “sovereignty” metaphor. The United States holds significant influence in promoting responsible state behavior at the UN, rooted in the “cyberspace” metaphor. This approach reflects the belief that voluntary measures provide greater flexibility and adaptability to address the multifaceted challenges of cyberspace. In contrast, China has made notable advancements in raising global awareness about the notion of “sovereignty.” This progress has contributed to a growing trend of nations adopting regulations and policies that emphasize state authority and control over the internet.
By embracing the metaphor of the internet as “commons,” the focus of internet governance can be directed toward governing various shared resources, including the “cable commons,” the “communications commons” and the “content commons.” This approach allows Taiwan to navigate the delicate balance between national and private control of the internet to protect its democratic system. Grounded in compelling evidence, Ostrom’s research on commons highlights that individuals from all walks of life possess the ability to voluntarily organize and establish rules to protect shared resources.
Polycentric governance offers a pathway to address the various challenges posed by cyberattacks. This approach acknowledges the benefits and constraints of multilevel regulation, underscores the importance of self-organization, and recognizes the vital role of internet governance from the local level. Taiwan’s efforts to defend its democracy in the digital realm extend beyond its borders, offering a blueprint for other nations navigating similar threats. Ultimately, Taiwan’s success in this endeavor will not only strengthen its own democratic system but also contribute to a more resilient, inclusive, and cooperative global digital ecosystem.