8.1 Introduction
Many legal and political commentators dubbed Donald Trump’s false claim that he was the actual victor of the 2020 American presidential election, ‘the Big Lie’.Footnote 1 No matter how he complained and dissembled, he lost. After losing the 2020 election, Trump went on a fundraising binge, asking his supporters to give to his legal defense fund so that he could litigate the results of the 2020 election, which he fraudulently claimed he had won.Footnote 2 According to the House of Representatives’ January 6 Select Committee, this fund did not exist.Footnote 3 As Select Committee member Congresswoman Zoe Lofgren put it, ‘the Big Lie was also a big rip-off’.Footnote 4 Because the 2020 presidential election was not stolen,Footnote 5 and the legal defense fund he touted was nonexistent, Trump’s post-2020 election fundraising was a fraud within a fraudFootnote 6 – giving rise to a reasonable argument that it violated the federal wire fraud statute and also constituted common law fraud.Footnote 7
Wire fraud involves 'any scheme to defraud another person that uses electronic communications, either across state lines or internationally'.Footnote 8 Since most solicitations for funds in the digital age are made by email or by text to mobile phones using telecommunications infrastructure, the federal wire fraud statute is triggered when such solicitations contain fraud. As Tyler Yeargain once described wire fraud, '[p]ut simply, lies were communicated over the Internet in an attempt to yield a financial return for the liars.'Footnote 9 As for common law fraud, one standard definition explains that the 'main purpose of fraud is to gain something of value by misleading or deceiving someone into thinking something which the fraud perpetrator knows to be false. Criminal fraud requires criminal intent on the part of the perpetrator, and is punishable by fines or imprisonment'.Footnote 10
Legal accountability for the attempts to overturn the 2020 election is finally coming home to roost, with an indictment for federal crimes from Special Counsel Jack Smith against ex-President Trump and an indictment for Georgia crimes against Trump and eighteen co-conspirators from the Fulton County District Attorney.Footnote 11 Both indictments cover expansive ground, ranging from the crimes of pressuring state officials to illegally overturn the 2020 election, to fabricating fake electors, to interfering with Congress's electoral count. But one charge seemed conspicuously absent from both indictments: any charge of fraud for fundraising from the Big Lie (that Trump was the true victor of the 2020 election) and for the Big Rip Off (raising money for an election defense fund which did not exist).
The question of whether to prosecute Trump, his campaign, or his fundraising team for pushing this aspect of the Big Lie is up to Special Counsel Smith, who was assigned by US Attorney General Merrick Garland to look into Trump's handling of classified and military documents at Mar-a-Lago as well as Trump's actions related to January 6.Footnote 12 The Special Counsel may not pursue these charges because there are easier cases to make, including violations related to the conspiracy to have fake electors submit paperwork purporting that they were real electors.Footnote 13 But this chapter argues that there were real violations of the federal wire fraud statute surrounding the fundraising after the 3 November 2020 election by Trump and his fundraisers, who were trafficking in disinformation about the outcome of the 2020 election as well as the uses of the money. (Another possible source of accountability is state prosecutors in Fulton County, Georgia, who could prosecute this as common law fraud.) In this chapter, I will outline why Trump's deceptive fundraising after the 2020 election (including some fundraising that was ongoing as this chapter was written in 2024) is legally fraught.Footnote 14
The chapter argues that when Trump (and his allies) used the Big Lie after the 2020 election to raise money from unsuspecting supporters who trusted him, he (and his allies) likely stepped across the line from protected free speech into criminal fraud, which enjoys no First Amendment protection.Footnote 15 It will proceed as follows: Section 8.2 will explain why the Big Lie that Trump was the true victor of the 2020 election was disinformation. Section 8.3 will demonstrate how federal courts, including the Supreme Court, conceptualize fraud as being outside the protection of the First Amendment. Section 8.4 will show that the Department of Justice (DOJ) often uses wire fraud as a charge in cases involving campaign funds. Section 8.5 will argue that Trump, the head of his political action committees (PACs), and even the Republican Party – all of whom used the Big Lie to fundraise – have exposed themselves to possible prosecution for wire fraud. Finally, Section 8.6 will follow the money trail, showing where money from the Big Lie was used.
8.2 The Big Lie Was Disinformation
After Trump lost the 2020 election, he, some of his lawyers and certain members of his campaign staff all pushed the disinformation known as the ‘Big Lie’ that Trump was the true victor of the 2020 election.Footnote 16 He was not.Footnote 17 This Big Lie was deployed unsuccessfully in over sixty lawsuits that challenged the legitimacy of the 2020 election in court.Footnote 18 The lawyers who brought these frivolous suits have been subject to sanctions,Footnote 19 calls for disbarment,Footnote 20 as well as withering disapproval from fellow lawyers.Footnote 21 As I explained in a law review article, happily the American judiciary was not deceived by the Big Lie.Footnote 22 But tragically, thousands of Republican donors, including many small donors, did fall for the Big Lie.Footnote 23
While most campaign fundraising activities are enrobed in a thick blanket of First Amendment protections,Footnote 24 the fundraising by Trump and his allies that relied on the Big Lie arguably falls outside of those free speech protections, landing squarely in the category of both common law fraud and federal wire fraud.Footnote 25 While no one has yet been prosecuted for this activity, there are hints in press reporting that Special Counsel Smith may be investigating this aspect of the run-up to January 6.Footnote 26 And press reports indicate that he is looking not just at Trump, but at the Save America PAC and the Make America Great Again PAC, as well as other campaign finance entities such as Defend Our Republic, the group of Trump lawyer and Georgia co-defendant Sidney Powell.Footnote 27 In Trump's federal indictment regarding January 6, there are no charges related to fundraising,Footnote 28 but there could yet be a superseding indictment that adds such charges, just as happened in the Mar-a-Lago documents case, where additional criminal charges were added later.Footnote 29
Here I use Andrew Guess and Benjamin Lyons' definitions of misinformation and disinformation. Misinformation is false information that 'contradicts or distorts common understandings of verifiable facts'.Footnote 30 Meanwhile, 'disinformation is the subset of misinformation that is deliberately propagated. This is a question of intent: Disinformation is meant to deceive, while misinformation may be inadvertent or unintentional.'Footnote 31 The Big Lie as propagated by Trump, his lawyers and his political committees was not just misinformation; rather, it was a particularly potent and virulent form of disinformation. As scholars and advocates have found, such disinformation can be particularly corrosive in a democracy.Footnote 32
The Special Counsel’s 1 August 2023 indictment against Trump focused on his actions leading up to and on January 6, and mentioned six unindicted co-conspirators, five of whom were lawyers. As the indictment recited:
The Defendant [Trump] lost the 2020 presidential election … Despite having lost, the Defendant was determined to remain in power. So for more than two months following election day on November 3, 2020, the Defendant spread lies that there had been outcome-determinative fraud in the election and that he had actually won. These claims were false, and the Defendant knew that they were false.Footnote 33
The Big Lie was condemned in real time. In November 2020, '[a]n open letter signed by retired federal and state judges, former state attorneys general who served under Republican and Democratic governors, and law professors condemned Trump's claims of fraud as being presented "without evidence and false", singling out Republican officials who have publicly supported the president's efforts to have thousands of ballots thrown out.'Footnote 34 And in 2023 the former assistant director of the FBI Frank Figliuzzi asserted about Trump's fundraising that '[w]hen you raise millions based on a fraudulent claim, you've committed a crime. And, you just might have to give those millions back.'Footnote 35 Meanwhile former federal prosecutor Andrew Weissmann indicated that he anticipates 'a criminal case about the Trump PAC and forfeiture allegations/seizures'.Footnote 36 In other words, if the Special Counsel alleges that the money in Trump's PAC(s) is the fruit of a crime, then he could move for pre-trial seizure of those funds.Footnote 37
8.3 The Courts’ Views of Fraud
Trump's criminal lawyers are already relying heavily on the First Amendment as a defense for the actions charged in the Special Counsel's 1 August 2023 indictment about January 6.Footnote 38 But although the Supreme Court has been very lenient on liars, even in the context of elections,Footnote 39 it still maintains that fraud is outside the ambit of First Amendment protections.Footnote 40 The argument I am advancing here is that just as common law fraud (and wire fraud) is not protected by the First Amendment, raising money for Trump's Save America PAC (and other political committees) with the Big Lie is not covered by the First Amendment either.
The Supreme Court has long been hostile to individuals who have defrauded others. Dating back to 1820, the Court stated, 'the first principles of the common law [is that] fraud [is] the object of its peculiar abhorrence, and [fraud] contaminat[es] every act'.Footnote 41 The Court has defined common law fraud as including 'a scheme to deprive a victim of his entitlement to money'.Footnote 42 In 1976, the Supreme Court concluded in Virginia Board of Pharmacy v. Virginia Citizens Consumer Council, Inc., a case about commercial advertisements, that '[u]ntruthful speech, commercial or otherwise, has never been protected for its own sake.'Footnote 43 And in another case the Supreme Court added that because 'common-law fraud has long encompassed certain misrepresentations by omission, "false or fraudulent claims" include more than just claims containing express falsehoods. The parties and the Government agree that misrepresentations by omission can give rise to liability.'Footnote 44 Thus, according to the Supreme Court, fraud can consist of both express lies and failures to tell key truths. In Trump's case, the express lie was the Big Lie that he won the 2020 election, and the failure to tell the truth was omitting where the money raised by his PACs would really be used.
In a case decided in 2023, United States v. Hansen,Footnote 45 the Supreme Court considered a First Amendment defense raised by an individual who induced immigrants to break US immigration and naturalization laws through fraudulent promises of an easy path to US citizenship. As the Supreme Court explained, 'federal law prohibits "encourag[ing] or induc[ing]" illegal immigration. 8 U.S.C. § 1324(a)(1)(A)(iv) … Properly interpreted, this provision forbids only the intentional solicitation or facilitation of certain unlawful acts. It does not "prohibi[t] a substantial amount of protected speech".'Footnote 46 Defendant Helaman Hansen sold immigrants the false hope of becoming US citizens through 'adult adoption'. This path to citizenship is nonexistent. This grifter charged desperate immigrants who wanted to stay in the USA thousands of dollars, making $2 million in the illegal scheme.Footnote 47
When the federal government prosecuted Hansen for this fraud, he defended himself by claiming that the immigration law that was being used against him was constitutionally overbroad and violated his free speech rights.Footnote 48 The Ninth Circuit had agreed with him.Footnote 49 The Supreme Court disagreed in a seven-to-two decision, quoting Illinois ex rel. Madigan v. Telemarketing Associates, Inc., a case about charitable fraud, for the proposition that ‘the First Amendment does not shield fraud’.Footnote 50 In Hansen, the Court explained what counted as criminal solicitation:
Criminal solicitation is the intentional encouragement of an unlawful act. Facilitation – also called aiding and abetting – is the provision of assistance to a wrongdoer with the intent to further an offense’s commission. While the crime of solicitation is complete as soon as the encouragement occurs, liability for aiding and abetting requires that a wrongful act be carried out. Neither solicitation nor facilitation requires lending physical aid; for both, words may be enough.Footnote 51
Hansen's lawyers argued that criminalizing speech that only induced an immigrant to break a civil law violated the First Amendment (as many violations of immigration law are handled as civil, not criminal, matters).Footnote 52 But the Supreme Court rejected this distinction and upheld the challenged law as constitutional under the First Amendment.Footnote 53 While Hansen is of course focused on immigration law, its principles would apply in prosecutions of fraud in other contexts, like raising money for an election defense fund which, like 'adult adoption citizenship', did not exist.
In 2010, in United States v. Stevens, the Supreme Court infamously said that 'crush videos' are protected by the First Amendment; nonetheless, in that same case the Court noted that '[f]rom 1791 to the present … the First Amendment has permitted restrictions upon the content of speech in a few limited areas … [The punishment of these] historic and traditional categories long familiar to the bar[,] including … fraud … and speech integral to criminal conduct … have never been thought to raise any Constitutional problem.'Footnote 54 The Supreme Court came to a similar conclusion in United States v. Alvarez (better known as the stolen valor case) in 2012, that '[w]here false claims are made to effect a fraud or secure moneys or other valuable considerations, say offers of employment, it is well established that the Government may restrict speech without affronting the First Amendment.'Footnote 55
In United States v. Smith, Malcolm Smith, a New York Democratic State Senator, and Vincent Tabone, the Queens County Republican Party Vice Chairman, entered into a complicated scheme to allow Democrat Smith to run for Mayor of New York City as a Republican. They were both convicted of bribery and honest services wire fraud, and Tabone raised First Amendment defenses during his prosecution. Judge Kenneth Karas of the Southern District of New York rejected this argument:
[If] Tabone is attempting to raise an as-applied or facial First Amendment challenge to the statute, he fundamentally misconstrues the statute’s thrust. The statute does not criminalize mere association with a political party, or advocacy for certain political candidates. In fact, in this case, the statute is being applied to alleged bribes offered and received in return for certain conduct. Just because this alleged quid pro quo arrangement involved political-party officials, they are not entitled to immunity for their actions under the guise of protected speech.Footnote 56
These appeals failed and both men went to prison.Footnote 57
A federal district court reached a similar result in a 2023 case involving a scheme to defraud voters. In United States v. Mackey, a man was charged with tricking voters who supported Hillary Clinton into voting online (a voting method which does not exist in the USA).Footnote 58 This is sometimes referred to as the 'vote by Tweet' case.Footnote 59 Defendant Douglass Mackey, also known as 'Ricky Vaughn', argued that he had the free speech right to say what he said on social media to Clinton supporters. The federal district court in his case rejected this argument, stating: '[t]his case is about conspiracy and injury, not speech … [T]he language [Defendant Mackey] used is akin to verbal acts, which fall outside the scope of the First Amendment, rather than protected' speech.Footnote 60 Mackey was sentenced to seven months' incarceration for his crime.Footnote 61 The Southern Poverty Law Center, which tracks right-wing hate groups, said of the Mackey verdict that in having 'set more clearly visible boundaries around what is legal when sowing disinformation online, prosecutors may have also created inroads to bring about charges similar to those filed against other radical-right activists who engaged in coordinated efforts to shape the outcome of elections'.Footnote 62 Defendant Mackey has appealed his conviction, arguing that his actions in the 2016 election are protected by the First Amendment.Footnote 63 These cases, Hansen, Stevens and Alvarez at the Supreme Court level, and Mackey and Smith at the district court level, all indicate that defrauding individuals will not enjoy First Amendment protection.
8.4 Department of Justice Frequently Uses Wire Fraud Charges in Campaign Finance Cases
The Department of Justice has frequently used wire fraud to charge crimes that involve campaign funds. In 2023 alone, the DOJ alleged wire fraud violations in a case 'charging the [Republican] Sheriff of Culpepper County, Virginia, and three … men with a conspiracy to exchange bribes for law enforcement badges and credentials. According to court documents, from at least April 2019 … Sheriff Scott Howard Jenkins, … accepted cash bribes in the form of campaign contributions totaling … $72,500.'Footnote 64 Additionally, on 9 May 2023, sitting Republican Congressman George Santos was indicted by the DOJ. He was charged with seven counts of wire fraud, three counts of money laundering, one count of theft of public funds, and two counts of making materially false statements to the House of Representatives.Footnote 65 Santos was expelled from the House of Representatives after the House Ethics Committee 'unanimously concluded that there was substantial evidence that Representative George Santos: knowingly caused his campaign committee to file false or incomplete reports with the Federal Election Commission; used campaign funds for personal purposes; [and] engaged in fraudulent conduct'.Footnote 66
In 2022, a Democratic ex-Congressman was hit with wire fraud charges. As a press release from DOJ stated, a ‘28-count indictment was unsealed … charging a former member of Congress with multiple fraud schemes and campaign contribution fraud. Terrance John “TJ” Cox … is charged with 15 counts of wire fraud, … and one count of campaign contribution fraud.’Footnote 67 The DOJ leveled similar charges against a former Republican Governor of Puerto Rico, Wanda Vázquez Garced.Footnote 68
There have been guilty pleas in cases where defendants have been charged with wire fraud in campaign finance cases. A former Republican congressional candidate, Nicholas Jones, pleaded guilty to wire fraud ‘for falsifying records to conceal thousands of dollars of in-kind contributions by employees in a report to the Federal Elections Commission (FEC)’.Footnote 69 In 2023, the Chair of the Louisiana Democratic Party, Karen Carter Peterson, pled guilty to wire fraud in a case that accused her of defrauding the party and political donors.Footnote 70 Republican ex-Arkansas State Senator Jeremy Hutchinson pled guilty to taking bribes as an elected official to move legislation for a business.Footnote 71 In 2020, Republican then-Member of Congress Duncan Hunter and his wife were accused of wire fraud for the personal use of campaign funds.Footnote 72 Congressman Hunter pled guilty,Footnote 73 and was sentenced to eleven months in prison.Footnote 74 But the Congressman never served jail time because President Trump pardoned him.Footnote 75
Individuals running so-called Scam PACs have also been convicted of wire fraud violations. An example of this from 2020 is a ‘Maryland political consultant [who] was sentenced to three years in prison … for fraudulently soliciting hundreds of thousands of dollars in political contributions through several scam political actions committees (PACs) that he founded and advertised as supporting candidate for office and other political causes’.Footnote 76 In another example, in 2022, a ‘California man pleaded guilty … to conspiracy to solicit millions of dollars in contributions to two political action committees based on false and misleading representation that the funds would be used to support presidential candidates during and after the 2016 election cycle’.Footnote 77 A former Republican candidate for the US House of Representatives, Robert Cannon Hayes, pled guilty and was sentenced for ‘wire fraud and willfully violating the Federal Election Campaign Act (FECA) by operating fraudulent and unregistered political action committees’.Footnote 78 And in 2023, Jack Daly and Nathanael Pendley ‘pled guilty to conspiring to (i) commit mail fraud and (ii) lie to the Federal Election Commission’ because of their actions related to a scam PAC called the Draft PAC.Footnote 79
The Department of Justice has also chalked up wins in front of juries in cases about campaign funds that charged wire fraud. For instance, in 2020, Greg Lindberg and John Gray were convicted in a case of bribing the North Carolina Insurance Commissioner. As Acting Assistant Attorney General Brian Rabbitt said, ‘[w]hen Greg Lindberg and John Gray offered millions of dollars in bribes to the North Carolina Insurance Commissioner, they referred to their elaborately corrupt scheme as a “win–win” – unaware that the FBI was watching and listening’.Footnote 80 In another case, a jury convicted ‘former [Republican] US Representative Stephen E. Stockman for orchestrating a scheme to steal hundreds of thousands of dollars from charitable foundations and the individuals who ran those foundations to illegally finance Stockman’s campaigns for public office and to pay for his and others’ personal expenses’.Footnote 81
All of this is evidence that if the DOJ wants to charge Trump, his PACs or his campaign with wire fraud for defrauding donors of money using the Big Lie, this prosecutorial tool is available.Footnote 82 Notably, these prosecutions have been brought against both Republicans and Democrats. Moreover, the Trump Administration prosecuted Republicans and the Joe Biden Administration has prosecuted Democrats. So this does not fall into the lazy and inaccurate trope of criminalizing politics, or of an administration going after its political enemies.
8.5 Legal Peril for Trump, Trump Fundraisers and the Republican National Committee
Two investigations have focused on the post-2020 fundraising by team Trump: one by Congress and one by Special Counsel Smith.Footnote 83 The Save America PAC was created on 9 November 2020, just days after Trump lost the 2020 election.Footnote 84 Trump's Save America PAC was the primary vehicle for his post-election fundraising.Footnote 85 Super PACs that are nominally independent of Trump have also fundraised on the basis of the Big Lie.Footnote 86 The Select Committee found that the Save America PAC, along with the Trump campaign, raised a quarter of a billion dollars.Footnote 87 Many of its fundraising emails and texts featured the Big Lie prominently.Footnote 88 As the Select Committee detailed:
Evidence gathered by the Committee indicates that President Trump raised roughly one quarter of a billion dollars in fundraising efforts between the [2020] election and January 6th [footnote omitted]. Those solicitations persistently claimed and referred to election fraud that did not exist. For example, the Trump Campaign, along with the Republican National Committee, sent millions of emails to their supporters, with messaging claiming that the election was ‘rigged’, that their donations could stop Democrats from ‘trying to steal the election’, and that Vice President Biden would be an ‘illegitimate president’ if he took office.Footnote 89
As the Select Committee also documented, none of this was true, and multiple people told Trump in real time that there was 'no there, there' when it came to massive election fraud in the 2020 election.Footnote 90 The fact that multiple individuals told Trump he was completely wrong about the existence of election fraud featured prominently in his third indictment, brought by Special Counsel Smith for his actions surrounding January 6.Footnote 91 Moreover, some of the post-2020-election Trump fundraising emails indicated that the money would be used for an election defense fund to support the sixty-plus lawsuits challenging the 2020 election that he would eventually lose.Footnote 92 The Select Committee investigated the fundraising by Save America PAC and other Trump fundraising entities and discovered that the election defense fund did not exist.Footnote 93 Rather, as the Select Committee explained: 'Despite what they told their supporters, however, most of their money was not used to stop any purported steal – it was diverted to accomplish the Big Rip-off. Millions of dollars that were raised ostensibly for "election defense" and "fighting voter fraud" were not spent that way at all.'Footnote 94
Finally, in an indication of how bizarre the post-2020 election period has become, there is also potential for wire fraud charges against the Republican National Committee (RNC) itself. Key events happened at RNC headquarters. For example, the following incidents are detailed in Fulton County, Georgia’s indictment of Trump and eighteen co-conspirators to overturn Georgia’s election:
On or about the 19th day of November 2020, Rudolph William Louis Giuliani, Jenna Lynn Ellis, Sidney Katherine Powell, and unindicted co-conspirator Individual 3, whose identity is known to the Grand Jury, appeared at a press conference at the Republican National Committee Headquarters on behalf of Donald John Trump and Donald J. Trump for President, Inc. (the ‘Trump Campaign’) and made false statements concerning fraud in the November 3, 2020 presidential election in Georgia and elsewhere. These were overt acts in furtherance of the conspiracy.Footnote 95
And as the Select Committee found:
Moreover, the Select Committee’s investigation shows that the RNC knew that President Trump’s claims about winning the election were baseless and that post-election donations would not help him secure an additional term in office. Yet, both the Trump Campaign and the RNC decided to continue fundraising after the election, a decision that would have come from President Trump himself.Footnote 96
Thus, if Trump is culpable for defrauding donors by fundraising on the basis of the Big Lie, then so too are the upper echelons of the RNC who did the same thing.
There should be legal consequences for defrauding donors with the Big Lie. The brutal repetition of the Big Lie for years warped Republicans' and especially Trump supporters' views of the 2020 election. Years later, many still think there was voter fraud.Footnote 97 And many are still forking out their hard-earned dollars because of the Big Lie.Footnote 98 Incidentally, the RNC is also implicated in a completely different alleged crime: Trump's fake elector scheme. As the Special Counsel detailed in his 1 August 2023 indictment of Trump:
On December 6 [2020], the Defendant [Donald Trump] and Co-Conspirator 2 [John Eastman] called the Chairwoman of the Republican National Committee to ensure that the plan was in motion. During the call, Co-Conspirator 2 told the Chairwoman that it was important for the RNC to help the Defendant’s Campaign gather electors in targeted states, and falsely represented to her that such electors’ votes would be used only if ongoing litigation in one of the states changed the results in the Defendant’s favor. After the RNC Chairwoman consulted the Campaign and heard that work on gathering electors was underway, she called and reported this information to the Defendant, who responded approvingly.Footnote 99
And then, a week later, the Chairwoman of the RNC sent Trump the names of the fake electors who had pretended to vote for him. As detailed in the indictment, this is what happened:
That evening [14 December 2020], at 6:26 p.m., the RNC Chairwoman forwarded to the Defendant [Trump], through his executive assistant, an email titled, 'Electors Recap Final' which represented that in 'Six Contested States', [Arizona,] Georgia, Michigan, Nevada, Pennsylvania and Wisconsin the Defendant's electors had voted in parallel to Biden's electors. The Defendant's executive assistant responded, 'It's in front of him!'
This behavior was also noted in the Fulton County indictment, which stated:
On or about the 8th day of December 2020, Donald John Trump and John Charles Eastman placed a telephone call to Republican National Committee Chairwoman Ronna McDaniel to request her assistance gathering certain individuals to meet and cast electoral votes for Donald John Trump on December 14, 2020, in certain states despite the fact that Donald John Trump lost the November 3, 2020, presidential election in those states. This was an overt act in furtherance of the conspiracy.Footnote 100
At this point, neither the RNC nor its Chairwoman has been charged with any crimes in any venue.
The Fulton County Prosecutor did, however, note that one of the overt acts in the overall conspiracy to overturn the 2020 election in Georgia was the following:
On or about the 27th day of December 2020, Mark Randall Meadows sent a text message to Office of the Georgia Secretary of State Chief Investigator Frances Watson that stated in part, ‘Is there a way to speed up Fulton county signature verification in order to have results before Jan. 6 if the trump campaign assist financially.’ This was an overt act in furtherance of the conspiracy.Footnote 101
Put another way, the sitting Chief of Staff to the president was offering Trump campaign funds to speed up Fulton County's signature verification as part of the alleged criminal conspiracy to overturn the lawful 2020 election. A federal judge has already found that these actions were beyond Meadows' powers as a federal official because the Hatch Act bars executive officers from interfering or intervening in an election.Footnote 102 Nonetheless, there are thirty unindicted co-conspirators in the Fulton County indictment who could face future prosecution, including for fraudulent fundraising, so this story is still unfolding.
8.6 Where Did the Money Raised Using the Big Lie Go?
Some of the fundraising after the 2020 election using the Big Lie was done through Donald J. Trump for President, Inc. – that is, the Trump 2020 campaign. Even before the physical insurrection at the Capitol, the Select Committee found that Trump and certain of his lawyers and his supporters waged a campaign to get state election officials to overturn the results of the 2020 election. The Select Committee further found that President Trump had approved funding of ads that badgered state lawmakers to overturn the election, including in Arizona:
President Trump personally approved a series of advertisements that the Campaign ran on cable television and social media in several important States. One advertisement in Arizona called for pressure on Governor Ducey … alleging, ‘The evidence is overwhelming. Call Governor Ducey and your legislators. Demand they inspect the machines and hear the evidence.’ … Stand up for President Trump. Call today. Paid for by Donald J. Trump for President, Inc.Footnote 103
Top Trump advisors, including his son-in-law Jared Kushner, were involved in approving the ad campaign to pressure state lawmakers to overturn the results of the 2020 election. As the Select Committee noted:
Trump Campaign Senior Advisor Jason Miller … wrote that ‘the President and Mayor Giuliani want to get back up on TV ASAP, and Jared [Kushner] has approved in budgetary concept, so here’s the gameplan’ in order to ‘motivate the GOP base to put pressure on the Republican Governors of Georgia and Arizona and the Republican-controlled State legislatures in Wisconsin and Michigan to hear evidence of voter fraud before January 6th.’ Miller anticipated a budget of $5 million … On December 22nd, Jason Miller texted Jared Kushner that ‘POTUS has approved the buy’.Footnote 104
This places President Trump, Donald J. Trump for President, Inc., and his key advisors at the center of the effort to illegitimately overturn a democratic election at the state level.
The Select Committee found that the Save America PAC, in turn, gave funds to former White House Chief of Staff Mark Meadows' charity, the Conservative Partnership Institute (CPI).Footnote 105 The CPI then gave $1 million to another group called the American Voting Rights Foundation,Footnote 106 which was created two days after the CPI got money from Save America PAC.Footnote 107 Through this daisy-chain of donations, which occurred in 2021, Save America PAC helped fund the bogus effort to audit the 2020 vote in Arizona – which the press derisively called the Cyber Ninjas' 'fraudit', a portmanteau of 'fraud' and 'audit'.Footnote 108 As criminal investigations around Trump heated up in 2022 and 2023, Save America PAC spent tens of millions on criminal defense lawyers for Trump and witnesses around him.Footnote 109
Money was diverted from the 2020 Trump Campaign to the Save America PAC. And much of this money went to Trump-affiliated companies. As the Select Committee noted when following the money:
The Trump Campaign spent the money on President Trump, giving donations to his associates, and keeping it for himself in Save America. Hundreds of millions of dollars that were raised to go towards ‘election defense’ and ‘fighting voter fraud’ were not spent that way at all. To the contrary, most of the funds remain unspent, and millions have been paid to companies that are known affiliates of President Trump, or payments to entities associated with former Trump administration officials. Since the election, former Trump officials who are still working for President Trump’s PACs and are publicly receiving salaries as FEC-reported ‘payroll’, are also associated with these companies.Footnote 110
Another entity to consider is American Made Media Consultants, an LLC to which the Trump campaign paid millions throughout the 2020 re-election campaign. This LLC spent hundreds of thousands of dollars on messaging about January 6.Footnote 111 According to the Campaign Legal Center, 'reporting shows that Jared Kushner approved AMMC's [American Made Media Consultants'] formation and that its board initially included members of the former president's and former vice president's families who also held senior roles with the Trump campaign.'Footnote 112 The Select Committee summed up: 'After raising $250 million dollars on false voter fraud claims, mostly from small-dollar donors, President Trump did not spend it on fighting an election he knew he lost. Instead, … President Trump got a war chest with millions of dollars, and the American people were left with the U.S. Capitol under attack.'Footnote 113 If any of the money raised by the Big Lie ended up in Trump's hands, that could be a key element in proving wire fraud in a federal criminal case or common law fraud in a state criminal case. In other words, Trump was not lying about the outcome of the 2020 election only to puff up his own ego – he was lying for monetary gain, which is a factor that triggers criminal liability.
8.7 Conclusion
Within weeks of the Select Committee making criminal referrals to the DOJ, Attorney General Merrick Garland appointed Jack Smith Special Counsel to investigate (1) Trump’s alleged retention of classified documents at Mar-a-Lago after he was president and (2) crimes that may have been committed during the events leading up to and on 6 January 2021.Footnote 114 As Jack Smith’s office became a subpoena factory, the Save America PAC funded lawyers for witnesses close to Trump.Footnote 115 And press reports indicate that the Special Counsel’s office is investigating the Save America PAC.Footnote 116
The Special Counsel's investigations go directly into the inner circle running Trump's 2024 election campaign. Presently, instead of having a traditional 'campaign manager', the Trump 2024 campaign has four senior advisors: Brian Jack, Chris LaCivita, Jason Miller and Susie Wiles, who is also the CEO of the Save America PAC.Footnote 117 Wiles is a key witness in Trump's federal criminal case about classified documents.Footnote 118 She is apparently referred to as 'PAC Representative' in Trump's Mar-a-Lago indictment, and Trump allegedly showed her a classified map even though she did not have the security clearance to see it.Footnote 119 She worked on Trump's 2016 campaign as its Florida chair, and she also worked on his 2020 campaign until being fired in September 2020.Footnote 120 But she is back for round three with Trump's 2024 campaign.Footnote 121 For the past few years, Wiles has worked out of both of Trump's resorts, Bedminster and Mar-a-Lago.Footnote 122 At present, Wiles has been accused of no wrongdoing by prosecutors, though it is possible that prosecutors are considering charges related to how the Save America PAC acted while she ran it.Footnote 123 She has been interviewed by the Special Counsel's office several times.Footnote 124
When this chapter was written, Special Counsel Smith had not charged any actions related to fundraising using the Big Lie or the closely related Big Rip Off.Footnote 125 Nor had the Fulton County District Attorney. Smith's indictment has been analogized to a rifle shot, while the Fulton County indictment was compared to a shotgun blast.Footnote 126 But the lack of fundraising-related charges is not because these avenues are legally foreclosed. Indeed, as this chapter has demonstrated, the DOJ frequently charges wire fraud in similar campaign finance cases. With four charges already pending against ex-President Trump for his actions surrounding January 6, it is possible that viable wire fraud charges were left out just to streamline the Special Counsel's case against Trump.Footnote 127 Additionally, the Fulton County Prosecutor did not focus on the fundraising in her sprawling Racketeer Influenced and Corrupt Organizations Act indictment of ex-President Trump and his eighteen co-conspirators.Footnote 128 It is still possible that the Fulton County District Attorney could charge common law fraud offenses related to Save America PAC's fundraising that defrauded residents of Fulton County, Georgia, of their hard-earned money.Footnote 129 Prosecuting these crimes would be better for future elections, as it would disincentivize other politicians from copying Trump's disinformation-driven fundraising.
9.1 Introduction
Chile's regulation of fake news dates back nearly a century. The initial instance occurred in 1925, during a constitutional crisis that resulted in the drafting of a new constitution. At that time, a de facto government issued a decree making it illegal to publish and distribute fake news. The second regulatory milestone occurred during the dictatorship of General Augusto Pinochet, with the inclusion of provisions related to defamation in the 1980 Constitution. Defamation involved spreading false information through mass media to unjustly tarnish someone's reputation.Footnote 1 Upon the restoration of democracy in Chile in 1990, these provisions were permanently removed from the legal system.Footnote 2 Since 2001, the judicial pursuit of disinformation in Chile has been limited to exceptional means such as the State Security Law or, indirectly, through the right to rectification.Footnote 3
Nearly a century after the initial criminalization of fake news in Chile's history, the discussion has resurfaced. In recent years, several bills have been proposed to penalize the dissemination and propagation of fake news. The current government has set up a commission against disinformation to, among other objectives, evaluate the impact of disinformation on democracy and contribute to the design of a policy against it.Footnote 4 As in the past, the drive to regulate comes at a time of constitutional crisis marked by a high degree of political tension and instability, and, as in prior instances, it could be perceived as an authoritarian endeavour to suppress political dissent. Nevertheless, in contrast to previous contexts, the current regulatory drive is occurring within a democratic framework.
Moreover, the phenomenon of fake news has assumed global dimensions – prompting the present volume – and responds to technological transformations that have accelerated its production and dissemination. In Chile, these shifts have not just influenced the dynamics of shaping public opinion but also destabilized the system of fundamental rights in which the rights to freedom of expression and information lie. This disruption arises because the authority to define the scope of online freedom of expression and its boundaries (including false or incorrect information), and the power to penalize transgressors, increasingly rest in the hands of major internet corporations and progressively less within the constitutional framework protecting these rights.
This chapter addresses the phenomenon of fake news through the lens of its regulatory history in Chile. The analysis not only elucidates the factors that prompted its regulation but also examines the array of techniques employed to prevent the propagation of disinformation. From this analysis, it will be possible to conclude that although some of the causes that motivated the regulation of fake news in the past coincide with those of the present, persuasive reasons that were absent in the past underscore the renewed interest in its criminalization. These reasons explain the significance that disinformation has acquired in the present and that motivates the publication of this book. The chapter proceeds as follows. Section 9.2 examines the history of fake news regulation in Chile between 1925 and 1973, which corresponds to the period when the 1925 Constitution was in force. Section 9.3 analyses the constitutionalization of fake news during Augusto Pinochet's dictatorship. Section 9.4 looks at the criminalization of disinformation in the State Security Law. Section 9.5 delves into the prosecution of fake news once democracy was restored in the country and the offences of disseminating fake news and defamation were repealed. Within this section, it will be demonstrated that the lack of regulation resulted in the procedural redirection of the phenomenon through alternative avenues, such as the right to rectification and the so-called Amparo remedy against injury to individuals' reputation. While these tools have, to a certain extent, facilitated the judicial pursuit of fake news disseminated through traditional mass media channels (press, radio and television), the same cannot be said for fake news circulating online, for reasons discussed below. This brings us to Section 9.6, where the intricacies of fake news in the digital era are examined, along with the impacts they have had on shaping public opinion and the protection of fundamental rights in Chile.
9.2 A Concise History of Disinformation Regulation in Chile (1925–1973)
The regulation of fake news in Chile has been initiated by de facto governments, in times of great political instability and polarization, as well as during constituent processes or constitutional crises. The first of these instances occurred in late 1924, amidst a severe political crisis that pitted the Chilean army against the President of the Republic, Arturo Alessandri Palma. The catalyst for this conflict was the processing of a legislative bill aimed at establishing parliamentary allowances.Footnote 5 This was a significant project, as it aimed to ensure parliamentary participation regardless of the economic capacity of the representatives. However, its development sparked substantial tension, as lawmakers were determining their own remuneration during an economically difficult period for the country while failing to enact a set of long-demanded social laws. Among the affected groups was the army, which was lobbying for an adjustment of its wages. During the advancement of the project, a cohort of 200 officers gathered at the premises of the National Congress to express their dissatisfaction. Days following this, the faction issued a series of demands to President Alessandri, which included vetoing the parliamentary allowance, accelerating the process of implementing social laws, replacing specific state ministers and instituting a constitutional reform to establish a presidential regime.Footnote 6 At this point, and despite many of the demands being met, the army no longer responded to constitutional authority. After an unsuccessful appeal to President Alessandri to dissolve the Congress, he ultimately resigned and went into exile in Italy.Footnote 7
On 11 September 1924, a governing junta commanded by the military’s highest-ranking official seized power. This effectively suspended President Alessandri’s constitutional mandate. He would return to the country six months later to lead the constitutional process that resulted in the formulation of the 1925 Constitution.Footnote 8 This period was marked by a high degree of political instability and conflict for the control of power. Moreover, it was characterized by a severe crackdown on the media.Footnote 9 During this period, the junta shut down opposition newspapers, and censorship became a frequently employed weapon for managing political dissent. The justification for censorship was public safety, and it was enacted through the declaration of constitutional states of exception.Footnote 10
In this context, during the final days of the junta's rule (prior to President Alessandri's return), Decree Law 425 of 1925, labelled 'Regarding Abuses of Publicity', was enacted. This decree replaced the Press Act of 1873, which had a distinctly liberal bent, and codified what had been specific harassment and press censorship measures employed during this period.Footnote 11 The 1925 decree brought significant alterations to Chilean press regulation.Footnote 12 Notable among them was the abolition of trial by jury for press abuses, as established by the 1833 Constitution. The decree also broadened the scope of offences committable via print or other publication types. Particularly noteworthy is the establishment of the offence of incitement to commit other felonies, such as homicide, robbery or arson, which is punishable even if the incited offences are not committed.Footnote 13
This decree introduced the offence of disseminating fake news into the Chilean legal system for the first time. Article 17 imposes custodial sentences and fines for ‘the publication or reproduction of fake news, alleged documents, adulterated materials, or inaccurately attributed documents to another individual’.Footnote 14 For distribution or reproduction to constitute an offence, it must occur orally in public places or through radio broadcasting, or via written materials sold, disseminated, or displayed at public venues or events.Footnote 15 In addition, the provision requires malice to be proven for the execution of such actions and the production of illegal results. Additionally, the decree assigns responsibility not only to the authors of the fake news publication but also to the media outlet’s director. In the director’s absence, liability is transferred to the publisher, and in the absence of the publisher, it falls upon those selling or disseminating the newspapers carrying the news.Footnote 16
The prohibition on disseminating fake news underwent several amendments over its history.Footnote 17 The most notable was introduced in 1967, under President Eduardo Frei Montalva's government, through the new Press Act. Most of its modifications involved limiting the circumstances under which the publication of fake news could be prosecuted criminally, extending the accused's defence opportunities and mitigating the repercussions of its commission.Footnote 18 The first of these amendments eliminated imprisonment for the dissemination of false information, leaving only fines as penalties. A subsequent set of amendments sought to increase the standards for liability enforcement. In this context, the law adds a new element to the definition of the criminal offence, requiring that the falsity communicated be substantive. This implies that the false information must be of significant magnitude to alter the essence of the content constituting the information's core. Furthermore, the law specifies that the dissemination of fake news must have the potential to seriously harm various public interests enumerated within the provision. These encompass public safety, order, administration, health and the economy, in addition to individual interests such as the dignity, credit or reputation of persons, as well as the interests of legal entities. Additionally, the law affords the defendant, particularly in cases involving mass media outlets, ample opportunities for a robust defence. Notably, the defendant may be exempt from criminal liability if they promptly and exhaustively rectify the falsehood in the disseminated information. Rectification involves a complete acknowledgement of the falsity of the news, made through a publication in the same media outlet with characteristics identical to the initial publication containing the false information.Footnote 19
These amendments diminished the incentives for judicial prosecution of the dissemination of fake news. In practice, however, the change mattered little, as this offence was scarcely prosecuted between 1925 and 1967. This can be attributed to the presence of alternative and more efficient procedural mechanisms. As will be further examined, the State Security Law also regulated fake news. This legislation imposed considerably harsher penalties not only on the individuals responsible for its creation, but also on directors and even media proprietors involved in its dissemination or reproduction.Footnote 20 In comparison to the 1967 Press Act, which imposed fines for the dissemination of false news, the State Security Law carried a penalty of imprisonment of up to five years.
Furthermore, Decree Law 425 introduced the right to rectification. This provision obligated mass media outlets to disseminate, free of charge, clarifications or rectifications of information that might have caused offence or made unjustifiable insinuations about an individual. When faced with such material, it became the responsibility of the media outlet to promptly publish clarifications or rectifications upon receiving appropriate notification. Failure to comply with this requirement resulted in penalties such as imprisonment and fines for contempt of court.Footnote 21 This was an easier and faster route to obtain redress for the publication of fake news than judicial actions.
9.3 The Offence of Defamation since the 1980 Constitution
On 11 September 1973, another coup d'état shook the nation. On this occasion, the armed forces overthrew President Salvador Allende and established a brutal military dictatorship that, under the command of Army General Augusto Pinochet Ugarte, remained in power for seventeen years. The dictatorship radically transformed the country's institutional order and began doing so from the very moment it assumed power. Just days after the coup, the junta entrusted a group of lawyers with drafting a constitutional proposal. In 1978, this Commission concluded its work with what would become the preliminary draft of the 1980 Constitution. The final text was approved through a plebiscite conducted without electoral rolls, political parties or a free and independent press.Footnote 22 In terms of freedom of expression and the press, the constitutional text reflected the dynamics that characterized the dictatorship's relationship with the media throughout its tenure in power.Footnote 23
The boldest stride in the regulation of fake news in the nation’s history took place within the 1980 Constitution, which included an express constitutional provision addressing this issue. This was done in the chapter on fundamental rights, specifically within the provision safeguarding the right to private and public life and reputation. In the relevant part, the provision stipulated that ‘the infringement of this precept’, meaning the violation of these rights, ‘committed through a mass media, and consisting of the imputation of a false fact or act, or that unjustifiably inflicts harm or discredit to an individual or their family, shall constitute an offence and shall be subject to the penalty determined by law’.Footnote 24
The constitutional provision was ambiguous. Not only did it safeguard a broad array of legal interests, including public life – conceptually challenging and of questionable normative value – but the criminal offences it encompassed were also not precisely defined in either the 1980 Constitution or elsewhere in Chilean law. At first glance, it appeared to include a single offence (referred to as defamation by Chilean legal scholars), yet in truth it contained – at the very least – two distinct offences that are classified as offences against reputation.
The disjunctive conjunction 'or' (the second 'or' in the provision quoted above) separated the first offence from the second. Both were violations of a person's private or public life or reputation and could only be perpetrated 'through mass media'. The first consisted of attributing a false fact or act, whereas the second referred to any attribution (false or true) that unjustifiably causes damage or discredit to a person or their family.Footnote 25 Although the latter was equivalent to libel expressed through mass media outlets (enforced at the time by the 1967 Press Act), the former was similar to spreading fake news (also covered by the 1967 Press Act), with the distinction that the criminal offence in the constitutional provision seemed not to require malice from the party making the imputation. In fact, it is only in the second offence that the inflicted harm is required to be unjustified. In other words, what this particular offence seemed to impose was objective liability (that is, liability without proof of fault) for the attribution of false facts or acts that harm the protected constitutional interests in question.
During the dictatorship, the ambiguities of the constitutional provision were only partially addressed by a law that gave shape to these criminal offences and established penalties for them. This law included two different offences based on the protected constitutional interests. The first of these protects people's private lives and is not relevant for the purposes of this chapter. The second relates to the protection of people's public lives and states that 'anyone who, without intent to libel, maliciously attributes a false fact related to their public life to a person' through a mass media outlet 'that causes or could cause material or moral harm' will face imprisonment and fines as indicated in the provision. This provision establishes a distinct offence against reputation that protects a person's public life, penalizing the deliberate attribution of a false fact via a mass media source. As mentioned earlier, Chilean legal scholars commonly refer to this offence as defamation.
Even though the repressive political landscape made this provision practically unnecessary, the offence of defamation offered authorities substantial advantages in prosecuting critical publications compared to other press law offences, such as libel and publication of fake news. There were two advantages over libel. The first relates to the protected constitutional interest. Libel protects a person's reputation, which is a legally established concept with a long historical tradition and substantial doctrinal development that help define its scope and facilitate its judicial application. 'Public life', on the other hand, lacks these traits and is basically ambiguous, allowing it to be infused with any meaning the judge decides to attribute to it. Furthermore, safeguarding public life as a fundamental right reverses the logic of fundamental rights, which protect the individual against state authority. In this case, it is the political authority that is shielded from scrutiny by the press and from the right of every citizen to be well informed, particularly about how the authorities wield power.
Another advantage of defamation over libel concerns malice. Libel requires the complainant to prove not merely that the defendant acted with malice, that is, with an intention to harm the offended party through illegal behaviour. According to well-established precedent, the complainant in a libel case must also demonstrate that the defendant had a specific intent, frequently referred to by legal scholars and case law as animus injuriandi: the intent to offend.Footnote 26 In this regard, the Supreme Court, in a relatively recent decision, indicated that animus injuriandi is a ‘distinct subjective element from malice, revealing a direct predisposition to harm one’s reputation, and without which sanctioning for said offence is not possible’.Footnote 27
The key issue here is not just that the absence of animus injuriandi precludes criminal liability, but that the presence of other types of motivations, such as the purpose to inform, precludes animus injuriandi as well. As a result, courts have historically been reluctant to convict mass media outlets of libel in circumstances involving publications that, while insulting to an individual, serve informative purposes. This is what motivated Enrique Ortúzar, the President of the Commission (bearing his name) that drafted the preliminary proposal for the 1980 Constitution during the dictatorship, to advocate for the inclusion of the offence of defamation within the constitutional framework.Footnote 28 In a session of the Commission, Ortúzar argued that in cases of libel ‘judges considered the presence of animus injuriandi as a condition for the existence of the criminal offence, and there was no professional libeller who, when brought to court, would not claim that there was no animus injuriandi in their conduct, thus largely avoiding any criminal liability’.Footnote 29 The provision of defamation explicitly excluded this intent by stating that the law applied to ‘anyone who, without intent to injure, maliciously attributes a false fact related to their public life to a person’. This granted defamation significant procedural advantages over libel.
Defamation also offered more advantages for criminal prosecution than the offence of publishing fake news. It is important to recall that while the Press Act of 1967 kept this offence (which originated in Decree Law 425 of 1925), it introduced significant modifications that made judicial prosecution of the crime more demanding. The first was that the falsehood of the news had to be substantive and significantly harm a public right or interest specified by the law. The second was that the defendant could avoid criminal liability by admitting the falsehood and rectifying the information promptly and completely.Footnote 30 None of these limitations applied to the offence of defamation. On the one hand, the law made no mention of the quality of the falsehood, merely criminalizing the attribution of a false fact concerning a person’s public life. On the other hand, the provision did not allow rectification of the information in order to avoid criminal liability. And in contrast to the fines imposed for the publication of false news, defamation carried penalties of up to five years of imprisonment. These characteristics endowed defamation with significant efficacy in maintaining the status quo, while simultaneously posing a formidable threat to democratic principles and freedom of speech. It is not surprising, as we will see later, that upon the restoration of democracy in Chile this offence was swiftly removed from the legal system.
9.4 Fake News as a Threat to State Security
Before analysing the regulation of false news in Chile following the restoration of democracy, it is necessary to provide a concise overview of the State Security Law, which also criminalizes the dissemination of fake news. This is the only law that remains in force to sanction such dissemination. It establishes that individuals commit crimes against the security of the state if they, among other things:
propagate verbally or in writing or through any other means, or send abroad, tendentious or fake news information intended to destroy the republican and democratic regime of government, or to disrupt the constitutional order, the country’s security, the economic or monetary system, price normalcy, stability of values and public assets, and the supply of populations, and Chileans who, while outside the country, disseminate such news abroad.Footnote 31
This provision extends back to another dictatorship, which lasted from 1927 to 1931 and was led by General Carlos Ibáñez del Campo. Towards the end of this period, during a moment of high political tension that eventually forced Ibáñez to resign from power and go into exile in Argentina, he issued a decree that contained the initial version of the aforementioned provision.Footnote 32
In its current form, the offence of disseminating deceptive or fake news, as outlined in the State Security Law, constitutes an offence of endangerment. What is penalized is disinformation that could be detrimental to state security or economic public order, without the need for actual harm or a specific threat to legally protected interests.Footnote 33 During Pinochet’s dictatorship, individuals were convicted of this offence under these terms. For instance, a woman was convicted of transporting anti-government documents on a flight to Europe, despite the fact that these documents never reached the people they were intended for. The lower court judge failed to recognize the political nature of the information and ruled that ‘labelling the military government as a “dictatorship” or using similar terms constitutes a false statement because our country has both a democratic and a republican system of government’.Footnote 34
Notably, prior to 2001, the law allowed the courts to suspend media outlets for up to ten days if they committed an offence sanctioned by this law, such as the dissemination of false news, and to immediately confiscate any editions in which the commission of these offences was evident.Footnote 35 In addition to the authors, the directors of the media outlets in which the news appeared were also criminally liable, as were, potentially, the outlets’ owners and, where no owner existed, even their printers.Footnote 36 Ultimately, the penalties associated with these offences have historically been much more severe than those attached to the dissemination of fake news under the Press Act.Footnote 37
9.5 The Return to Democracy and the End of Fake News Regulation
Upon the restoration of democracy in Chile, the government led by President Patricio Aylwin made the normal operation of the mass media a top priority. This endeavour required a substantial transformation of the legal framework governing the press, which was accomplished in two legislative stages. The first step consisted of an urgent measure to align existing legislation with the democratic reality, in which freedom of expression would play an important role. The second stage, which spanned a significant amount of time, involved the drafting of a new Press Act designed to effectively protect the freedom of the press and strengthen the media’s contribution to the democratic process.
On 17 April 1990, less than a month after assuming the presidency of the Republic, Patricio Aylwin submitted to the National Congress a bill intended to adapt press legislation to the country’s new political and institutional realities. The purpose of the bill was to eliminate all legal restrictions that impeded the free exercise of speech.Footnote 38 The resulting law introduced two significant amendments that are especially relevant to this chapter. First, it repealed the criminal offence of defamation, which had been introduced into the Press Act by the ruling junta in 1985. A broad spectrum of political parties agreed that the regulation of defamation enacted during the dictatorship severely impeded the exercise of free speech, and that the legal interests it protected were already covered by the slander and libel laws. In contrast to the brief life of the statutory offence, the constitutional provision on defamation survived much longer: it was not until 2005 that the necessary political support was gathered to implement one of the most extensive and significant reforms to the 1980 Constitution. This reform included the elimination of defamation from the Constitution, along with numerous other modifications.Footnote 39
The second relevant amendment pertains to the offence of disseminating false news under the Press Act 1967. Although the reform retained the core elements of the offence, it modified the liability of mass media outlet owners, editors and directors. Prior to the reform, they were all jointly responsible for false information spread through their media. The reform absolved them of liability in three circumstances. The first is the simple reproduction of information or news from news agencies, of information provided by public authorities on matters within their jurisdiction, or of information from individuals or institutions that, in the court’s opinion, are reasonably reliable or appropriate in relation to the subject. The second is live radio or television broadcasts, provided the media outlet exercised reasonable care to prevent the dissemination of the false information. Lastly, they are exempt from liability when fake news is disseminated in programmes or sections that are open to the public, with an explicit statement that what is broadcast there does not bind the media entity.Footnote 40
The second legislative effort, related to the democratization of mass media, was a lengthy endeavour initiated by President Aylwin during the return to democracy. In July 1993, the president sent a bill to Congress, and eight years later, in May 2001, the ‘new’ Press Act went into effect. Among the objectives of this law were to democratize the mass media and to consolidate in a single statute a set of matters that had been scattered across various legal instruments. Although the content of the new law did not depart radically from the 1967 Press Act, there are some significant differences. Among them, the principle of pluralism was incorporated for the first time in the history of press regulation in Chile. This was intended to promote the expansion of mass media outlets and to encourage the diversification of informational content.Footnote 41
Significant modifications were also made to reduce punitive measures. First, the legislation repealed the so-called offence of contempt under the State Security Law. This offence included libel, slander and defamation directed at the President of the Republic, ministers, parliamentarians, members of higher courts of justice and other important authorities. It was essentially a form of seditious libel aimed at preventing hostile attacks against the government and preserving public order. During the dictatorship, the government made extensive use of this provision, and its application continued during the transition to democracy. Indeed, its use eclipsed reliance on other, related offences (such as disseminating fake news).Footnote 42 At that time, many international human rights organizations severely criticized the existence of the offence of contempt.Footnote 43 With its repeal, Chile took a significant step towards democratization and the expansion of freedom of speech. As previously explained, a second substantial change to the State Security Law was the elimination of the provision that authorized courts to suspend media outlets and confiscate their publications in connection with convictions for violations of the law. In the same vein, the strict liability of directors and owners of media outlets was eliminated, and a common procedure for adjudicating criminal and civil liability for offences committed through a mass media outlet was established.
A number of long-standing criminal offences contained in the Press Act 1967 were also repealed by the Press Act 2001. For the purposes of this chapter, the dissemination of fake news was the most relevant one. Undoubtedly, its repeal was a significant historical event, as it ended a provision that had been in effect in Chile for an important portion of the twentieth century. Despite its long existence, however, this offence had a limited impact in judicial practice. Very few cases exist in which the courts actually applied this provision, and even fewer in which they used it to convict someone of spreading fake news.Footnote 44 Several reasons explain why this was so. Chief among them is that fake news was governed not only by the Press Act but also by the State Security Law, which provided more favourable conditions for criminally prosecuting the dissemination of fake news than the Press Act did.Footnote 45
Since the 2001 repeal of the fake news offence, other mechanisms have been used to address the dissemination of false information. The right to rectification and clarification is one of them. Decree Law 425 of 1925 first established this right, which the Press Act of 1967 and then the Press Act of 2001 reaffirmed. The right to rectification and clarification holds constitutional status in the Chilean legal system. Incorporated initially in 1970 as an amendment to the 1925 Constitution, it was also enshrined in the 1980 Constitution and is still in effect today.Footnote 46 This right allows any person who has been unfairly mentioned or offended by a mass media outlet to demand that the outlet publish a clarification or retraction. The responsible media outlets are required to publish it, and if they fail to do so, the offended party may pursue legal action.Footnote 47 The courts have used the right to rectification in various ways in relation to disinformation and misinformation. For example, courts have recently indicated that the rectification procedure is appropriate for challenging the dissemination of false news,Footnote 48 and have ordered media outlets to correct such news and imposed fines for failing to do so.Footnote 49 On the other hand, courts have occasionally treated a media outlet’s rectification of false news as pertinent evidence exempting it from civil liability for damages or from constitutional liability for the violation of fundamental rights.Footnote 50
Lastly, false news has also been pursued through constitutional measures that protect fundamental rights. This is the Amparo remedy, which offers a rapid procedure and grants courts broad powers in the face of violations of constitutional rights, whether they originate from a state entity or even from a private entity (such as a media outlet). Although some courts have been reluctant to grant these remedies,Footnote 51 arguing that there are legal avenues to resolve such disputes, a significant number of cases have seen these remedies granted. They mainly involve situations where the dissemination of false news harms a person’s reputation. As reputation is a constitutionally enshrined right and is safeguarded by the Amparo remedy, the courts have accepted these actions and ordered media outlets to rectify false information they have published.Footnote 52
9.6 The Regulation of Disinformation in the Digital Context
Over two decades after the repeal of the offence of dissemination of fake news from the Press Act, a new regulatory drive has emerged in Chile. In June 2023, the government established a commission against disinformation, responsible for analysing the phenomenon and its implications for democracy. Among its primary responsibilities is the provision of recommendations to the relevant authorities regarding the steps necessary to develop a comprehensive public policy to address the challenges posed by disinformation and its detrimental impact on democratic processes.Footnote 53 Furthermore, beginning in 2020 and continuing onward, no fewer than six bills have been presented in the National Congress aimed at criminally penalizing the dissemination of false news in the country. These bills have garnered support from parliamentarians from across the political spectrum, reflecting a broad consensus regarding the urgent need to address the proliferation of false information, particularly on digital platforms and during electoral cycles.
Does this renewed desire to regulate the dissemination of false news spring from motivations similar to those that led to its regulation in the past? Fake news has historically been regulated in Chile by de facto administrations and during periods of high political polarization. Moreover, regulation has occurred during constitutional processes,Footnote 54 as a result of those processes,Footnote 55 or immediately following the ratification of a new constitution.Footnote 56 Beginning in October 2019, Chile experienced a constitutional crisis triggered by a massive popular uprising that engaged millions of people nationwide. In response to this pervasive discontent among citizens, a political consensus was reached to initiate a constitutional process aimed at addressing these issues through institutional means.Footnote 57 Two attempts were made to replace the constitution through democratically elected conventions.Footnote 58 Both failed, as the public rejected the constitutional proposals in referendums held in September 2022 and December 2023.Footnote 59 Unquestionably, this process has unfolded in an atmosphere of high tension and political polarization, similar to the political climate of the historical periods in which Chile regulated fake news. Despite these similarities, there is a significant difference in the political conditions motivating the current regulatory interest: Chile is presently governed under a democratic system.
Despite these political parallels, contemporary efforts to combat disinformation and misinformation differ in significant ways from those of the past. All of these differences can be attributed to the technological transformations that have reconfigured the process of public opinion formation in the modern world.
The transition to digital technologies has significantly altered what Jack Balkin refers to as the ‘infrastructure of freedom of expression’ – that is, the technologies and institutions that people use and trust to inform and communicate.Footnote 60 Today, our ability to inform and communicate depends significantly on large private companies such as Google, X (Twitter), Meta (Facebook), YouTube and TikTok. These companies have created enormous user communities worldwide, so much so that in January 2023, the latter three combined had more than 6 billion active users.Footnote 61 The phenomenon of disinformation and misinformation has evolved significantly within this framework. The legal regulatory structure in Chile has, to date, not been updated to address the new challenges that these technologies have created.
Pivotal events like the 2016 UK referendum on exiting the European Union and the election of Donald Trump as President of the United States of America during the same year were early indicators of the vast scale and scope of the new issues associated with disinformation and misinformation. These events underscored the digital infrastructure’s capability to generate and disseminate false information, on a vast scale, carrying profound implications for electoral processes and the overall democratic system. Although an in-depth exploration of the structural factors contributing to the expansion of false news in the digital realm is beyond the scope of this chapter, certain aspects warrant elucidation to grasp its distinctive nature and differentiate the current regulatory efforts against false news in Chile from past initiatives.
The first point underscores the remarkable level of user concentration that prominent internet corporations have achieved in recent years.Footnote 62 Initially conceived as a realm promoting freedom and democratization in communication, the Internet represented a shift away from the vertically centralized content production model dominated by major media corporations. It embraced a decentralized model, allowing individual users to create and distribute content under conditions resembling those of traditional media outlets.Footnote 63 However, this ideal was overshadowed by the explosive growth of major internet companies, which solidified their dominance by expanding their global user bases. Billions of users now converge on a single network, effectively competing with established news publishers for attention. Unlike these publishers, who generate their content – requiring careful curation to retain audience trust and avoid legal liabilities – major internet firms and platforms serve as intermediaries, organizing or disseminating content produced by third parties, generally lacking editorial control over it.Footnote 64
The primary challenge stems from the business model employed by these companies, which depends on retaining users’ attention for extended durations, and evidence suggests that sensational content tends to attract more attention.Footnote 65 In essence, major internet corporations possess incentives for the widespread dissemination of false news that traditional media outlets either lack or possess to a lesser degree. Furthermore, users receive this content as a result of potent algorithms that infer their political preferences from their online searches and likes.Footnote 66 These users then become potential conduits for propagating this information across highly concentrated networks that facilitate rapid and extensive dissemination. The combination of these factors has produced a capacity to disseminate false or misleading information at an unprecedented rate and to an audience unparalleled in human history.
Section 230 of the Communications Decency Act is a second important legal factor that further explains the ease with which fake news spreads on the Internet. Except in cases involving federal offences or violations of intellectual property rights, this provision provides internet intermediaries with complete immunity regarding illicit content posted on their platforms or services. One of its primary goals was to eradicate the ambiguity created by precedents that determined intermediary liability based on the amount of editorial control they exercised over content.Footnote 67 If intermediaries exercised editorial control, they were considered publishers and held liable for illegal content; otherwise, they were regarded as distributors and were exempt from liability for the same content. The issue was that courts frequently struggled with these classifications, causing uncertainty for intermediaries regarding potential legal actions and discouraging self-regulation through content moderation on their platforms due to the elevated risk of being classified as publishers.Footnote 68
By removing this ambiguity and granting intermediaries broad immunity for illegal content circulating on their platforms or services, Section 230 exempts internet intermediaries in the United States that host or transmit third-party content from a set of laws that would otherwise hold them accountable. This provision is frequently praised as the most important rule protecting freedom of expression on the Internet and a driving force behind its current form.Footnote 69 Some scholarly literature has even described it as a mechanism intended to foster the economic growth of internet companies, aligning with the principles of the First Amendment of the United States Constitution by preventing any state interference in the exercise of their freedom of expression.Footnote 70
The immunity rule of Section 230 of the Communications Decency Act (CDA) bears on the current interest in regulating the dissemination of false news in Chile. The Chilean Press Act 2001 targets media outlets: they are the entities held accountable for the offences or abuses specified by the law. Although this law abolished the crime of disseminating fake news, the right to clarification and rectification has been used to address these issues since its repeal.Footnote 71 In certain cases, media-based libel and slander have also been used for the same purpose. The problem is that the law is not clear about the legal status of internet intermediaries and digital platforms, such as Meta, X or YouTube.
Courts have occasionally categorized these services as media outlets, but this has not always been the case, and they have never held them liable for the crimes outlined in the law. Furthermore, because many of these companies are headquartered in the United States, they are not subject to the jurisdiction of Chilean courts, remaining entirely subject to Section 230 of the CDA. The immunity rule means that the limited legal options that the Chilean legal system currently provides for the prosecution of false news do not apply to the platforms that most vigorously disseminate such news at present.
Another significant effect of Section 230 of the Communications Decency Act is relevant to Chile’s ongoing efforts to regulate false news. This provision not only absolves intermediary services of liability for illegal content posted on their platforms, but also grants them broad authority to define the scope of acceptable expression within their domains. Remarkably, it grants them immunity for actions taken in good faith to restrict access to, or the availability of, offensive material. This has prompted self-regulation among digital platforms and contributed to the development of sophisticated content moderation systems. These systems frequently involve centralized regulatory bodies responsible for formulating moderation policies, establishing rule-based frameworks that delineate the boundaries of freedom of expression, and creating adjudicatory entities tasked with resolving user disputes through procedures the platforms themselves devise.Footnote 72 In both their operational principles and organizational structure, these systems bear striking similarities to legal systems, particularly the model of the First Amendment of the US Constitution. Notably, those responsible for designing these moderation systems, shaping their procedures and training the individuals engaged in content moderation are frequently US legal professionals educated in First Amendment doctrine and rooted in a legal and political culture they export to global user communities.Footnote 73
These systems have effectively transformed major digital platforms into influential governance structures that not only define the fundamental rules of communication within the digital domain but also enforce these rules through intricate moderation mechanisms.Footnote 74 The magnitude of the power amassed by these major digital platforms is such that, particularly in the Chilean context, it poses a potential threat to the established framework of fundamental rights that underpins the right to information and freedom of expression. This framework is enshrined in the Constitution, and the regulation of these rights is delegated to legislative acts. It operates under the supervision of a Constitutional Court, which examines whether legal provisions regulating freedom of expression are consistent with the Constitution. In contrast, conflicts involving an individual’s freedom of expression are resolved by the courts, which can directly invoke the Constitution to protect these rights or, when necessary, employ legal provisions to assess and penalize unlawful or abusive expressions.Footnote 75
In sum, major digital platforms pose a direct threat to the principles of the Chilean constitutional system. By autonomously defining the limits of acceptable speech on their platforms, they undermine the basic principle that any restriction or limitation on the right to information and freedom of expression must be derived from statutory law. Moreover, because decisions made by these platforms in this context fall beyond the jurisdiction of the Constitutional Court, judicial review is weakened. Furthermore, since major digital platforms fall outside the jurisdiction of Chilean courts, illegal content posted on them may remain immune, leaving the fundamental rights of their users inadequately protected under the Chilean legal system. This is why the current interest in regulating false news must also be understood as an effort to regain control over the basic rules governing the scope of freedom of expression within the system of fundamental rights. These factors represent novel variables that were absent from the equation that prompted the regulation of fake news in the past, and they are essential to understanding the impetus behind the current drive for regulation.
9.7 Conclusion
The Chilean history of regulating fake news demystifies the notion that disinformation is a contemporary phenomenon. A century of regulation reveals historical patterns that can offer insights into current approaches. One such pattern is the tendency to introduce regulations during periods of heightened political instability, often during constitutional processes or immediately thereafter. These regulations typically emerge under de facto governments. Conversely, they tend to be relaxed or abolished during democratic periods. This historical trend suggests that the regulation of fake news in Chile has been employed as a tool to quell political criticism, a stance at odds with the principles of free speech.
The current impulse to regulate fake news in Chile, much like previous instances, arises against the backdrop of a constitutional process marked by intense political polarization and instability. However, the present situation is unique in that it unfolds within a democratic government framework. Of greater significance, fake news has evolved into a global threat to democracy, primarily due to the ease with which disinformation and misinformation can spread in today’s digital public sphere. These circumstances introduce new arguments in favour of regulating fake news. Another pertinent argument applies specifically to Chile’s situation. As digital platforms assume increasingly influential roles of governance in digital communication, wielding the power to establish and enforce intricate systems of communication norms, they have begun to impact the fundamental rights framework, including free speech and the right to information. Concurrently, basic constitutional principles such as the rule of law, due process and transparency are now compromised.
Rather than serving as a tool to suppress political dissent, fake news regulation could be an appropriate means of tackling these issues. If it is to address the regulatory challenges that digital infrastructure poses to free speech, it must be designed within a framework that upholds fundamental rights and aims to prevent significant threats to democracy. Lessons from past regulatory experience can offer valuable guidance in this endeavour.
10.1 Introduction
In the digital age, the landscape of information dissemination has undergone a profound transformation. The traditional boundaries between information and news have become increasingly blurred as technology allows anyone to create and share content online. The once-exclusive realm of authoritative media outlets and professional journalists has given way to a decentralized public square, where individuals can voice their opinions and reach vast audiences regardless of mainstream coverage. The evolution of the digital age has dismantled the conventional notions of journalism and reshaped how news is obtained and interpreted. This shift has paved the way for the proliferation of fake news and online disinformation. The ease with which false information can be fabricated, packaged convincingly and rapidly disseminated to a wide audience has contributed to the rise of fake news. This phenomenon gained global attention during the 2016 US presidential election, prompting nations worldwide to seek strategies for tackling this issue.
In South Korea, the regulation of disinformation has emerged as a significant concern, particularly during election periods, and the impact of disinformation intensified amidst the backdrop of the COVID-19 pandemic. The extent of disinformation’s influence has prompted considerable attention. According to a 2023 global survey conducted by the Reuters Institute for the Study of Journalism at Oxford University,Footnote 1 66 percent of Korean respondents expressed concern about disinformation. This places South Korea as the ninth highest among the forty-six countries surveyed, underscoring a significant degree of concern within the population. Notable is the focus on disinformation in the political sphere. Among Korean respondents, 40 percent identified politics as a prime target of disinformation, nearly twice the percentage for disinformation concerning the economy (21 percent) or COVID-19 (21 percent). This emphasis on political disinformation suggests a considerable weariness among the public and a heightened sense of apprehension regarding the potential repercussions of disinformation on democratic processes and values.
The increasing need to address disinformation has led to calls for the regulation of media platforms. The Korean National Assembly has proposed more than forty bills aimed at regulating disinformation since 2017, but these bills have failed to pass for a variety of reasons. This chapter delves into the legislative struggle to regulate disinformation in South Korea. Commencing with a historical overview of disinformation regulation, the chapter proceeds to scrutinize the existing legal framework employed to counter disinformation. It then categorizes the fake news bills introduced since 2017. Subsequently, it investigates the delegation of responsibility to digital platforms for regulating disinformation, evaluating the current state of self-regulation and potential avenues for enhancement.
10.2 History of Regulating Online False Expressions
The Constitution of Korea has consistently enshrined the principle of freedom of expression. However, this freedom has undergone revisions to different degrees during the nine instances of constitutional amendment. On occasion, the scope of freedom of expression was subject to qualifications, contingent upon the inclusion of the phrase ‘except as specified by law’ within the constitutional text.Footnote 2 In the context of this constitutional history, efforts to fight against disinformation have been underway since the 1970s.
The most notable historical attempts to regulate disinformation occurred during the dictatorship in the 1970s. On 17 October 1972, President Park Chung Hee issued a special presidential declaration, which dissolved the National Assembly and suspended the constitution. This proclamation effectively established a long-term dictatorship. The regime, concerned about the dangers of free speech and a free press to its stability, adopted a series of emergency measures, with the first being Emergency Measure No. 1 in 1974. This measure explicitly forbade the ‘fabrication or dissemination of rumors’ (Article 3) and also prohibited the act of ‘broadcasting, reporting, publishing, or otherwise communicating rumors to others’ (Article 4). The autocratic government, under Emergency Measure No. 1, utilized its provisions to arrest and punish critics of the regime who were accused of spreading rumors. Similarly, Emergency Measure No. 9, enacted in 1975, criminalized the act of ‘fabrication and disseminating rumors or distorting facts’.
These Emergency Measures were primarily aimed at restricting criticism of the president and the government by regulating rumors. They also provided a convenient means of stifling dissent and punishing dissenters. In addition, the Emergency Measures went as far as prohibiting any debate on constitutional revision, effectively suppressing citizens’ expression of political views and infringing upon their fundamental rights. The enforcement of the Emergency Measures led to the prosecution of over 1,100 people, primarily those who expressed dissent against the dictator and the government. For instance, a man who criticized President Park Chung Hee was sentenced to seven years’ imprisonment on charges of fabricating rumors.
More than thirty years later, the South Korean Supreme Court made significant rulings declaring the Emergency Measures, which severely restricted people’s freedom of expression, unlawful. In a retrial held in 2010, the Supreme Court acquitted a defendant who had previously been sentenced for spreading falsehoods. The court ruled that Emergency Measure No. 1 was unconstitutional and invalid due to its excessive restriction on an individual’s freedom of expression and physical integrity.Footnote 3
After the Supreme Court’s ruling, the Constitutional Court further solidified its stance in 2013 by declaring Emergency Measures Nos. 1, 2 and 9 unconstitutional, citing the same constitutional concerns the Supreme Court identified when it invalidated Emergency Measure No. 1.Footnote 4 The Constitutional Court’s decision stemmed from its assessment that the provisions punishing rumor-mongering were not only abstract and vague in their criminal elements but also exhibited an excessively broad scope of application. As a result, the Court explained, these provisions made it challenging for citizens to anticipate which conduct would be deemed prohibited under the law. These historical instances illustrate how the criminalization of disseminating falsehoods can be wielded to stifle dissenting voices critical of the government. In reaction, the Korean courts have held that laws seeking to curb freedom of expression by criminalizing falsehood dissemination must be narrowly drawn to be constitutional.
Although the prohibition on spreading rumors was limited to a specific period through the Emergency Measures, under normal circumstances the spreading of falsehoods could fall under the purview of regulation through the Basic Telecommunications Act. Article 47(1) of the Act states that ‘any person who issues a false communication through telecommunications facilities with the intention of harming the public interest shall be subject to punishment, including imprisonment for up to 5 years or a fine up to 50 million Won (approximately $37,600)’. Initially, this provision was crafted to penalize individuals who used landline phones by adopting a fabricated identity or using another person’s name.
Since its enactment in 1961, Article 47(1) of the Basic Telecommunications Act had remained inactive for over forty years, with no recorded instances of its application for actual penalties. This inactivity can be attributed to the absence of documented cases involving deceptive communications using someone else’s identity. Nonetheless, the emergence of the Internet in the 2000s revitalized this provision, reinstating its significance in overseeing online communications. Article 47(1) of the Act was employed to punish online users who propagated false information.Footnote 5
An illustrative case highlighting the implementation of Article 47(1) was the notable Minerva case. The individual behind the username ‘Minerva’ gained online prominence for astute predictions and insightful economic analyses. However, Minerva found himself facing charges subsequent to posting on a popular portal site that the government’s foreign exchange reserves had been depleted, and currency exchange had been prohibited. These posts incited panic among online users and triggered a decline in stock prices. In the aftermath, the finance minister accused Minerva of spreading falsehoods regarding financial policies, leading to Minerva’s arrest. The charges against Minerva were based on the alleged spread of falsehoods that were deemed harmful to the public interest, as stipulated under Article 47(1) of the Basic Telecommunications Act.
The Constitutional Court, however, ruled that Article 47(1) was unconstitutional.Footnote 6 The Court held that this clause, as it restricts freedom of expression and imposes criminal penalties, must adhere to a rigorous standard of protecting free expression. This includes a requirement that it must be narrowly drawn to address potentially serious social harms. Although the provision prohibited misleading communication with the intention of ‘harming the public interest’, the Constitutional Court found that the definition of ‘public interest’ was too ambiguous and imprecise, making it challenging for even legal experts to determine whether particular speech activity harms the ‘public interest’ or not. The Court noted that such judgments of ‘public interest’ are likely to vary significantly based on the individual values and ethics of each person. Consequently, the Constitutional Court concluded that the provisions failed to inform the accused which purposes of communication were prohibited among the generally allowed online communications. This lack of clarity violated the requirement for clear guidance demanded by free expression and the principle of clarity in criminal justice.
The Justices expounded on the legality of spreading falsehoods as follows:
The evolution of Internet communication into the ‘most participatory marketplace’ and a ‘medium for fostering expression’ empowers recipients of information to access a diverse array of sources, while also facilitating real-time challenges or rebuttals to certain assertions. It proves arduous to assume that the aforementioned possibilities will be entirely obstructed by the inherent attributes of communication, encompassing elements such as anonymity and indiscriminate dissemination. Therefore, it cannot be conclusively asserted that there exists a distinct risk of encroaching upon the public’s entitlement to accurate information, or of fomenting criminal activities or disturbances in the national order, solely due to the dissemination of false facts.Footnote 7
The Justices pointed out that false information does not inherently jeopardize the public interest or undermine the advancement of democracy. It also serves the additional function of directing societal attention towards pertinent matters and encouraging public engagement. Because false communication does not in itself inherently inflict societal damage, according to the Court, the state’s broad and paternalistic intervention, employing vague criteria such as ‘for the purpose of harming the public interest’, lacks sufficient justification. The Justices emphatically elucidated in their supplementary opinion that assessing the value and detriment of expression or information ‘ought not to be initially determined by the state; instead, it should be entrusted to the self-correcting dynamics of civil society and the competitive interplay of ideas and viewpoints’.Footnote 8
Following the Constitutional Court’s ruling that Article 47(1) was invalid, the clause lost its effectiveness and Minerva was finally acquitted. However, a significant concern arose: there was no longer a comprehensive rule penalizing the online dissemination of falsehoods that jeopardize public safety. To close this legal gap, several amendments have been put forth in the National Assembly, but the provision remains unmodified as of now.
Article 47(1) of the Basic Telecommunications Act requires a more precise and constitutionally sound amendment, particularly in light of the absence of adequate regulation of disinformation that jeopardizes societal security or impairs the public interest. Such an amendment should mandate penalties only when specific harm is inflicted, rather than merely targeting those who disseminate false information. Nevertheless, considering the legislative backdrop of the Act, it is prudent to address disinformation through other pertinent internet-related legislation. In the subsequent sections, we will delve into the various legal frameworks within South Korea that hold potential for regulating disinformation.
10.3 Regulating Disinformation under the Current Law
In South Korea, intentional dissemination of falsehood is rigorously prohibited across a range of statutes. Three primary categories of current laws are commonly used to regulate various forms of disinformation. First, there are laws targeting disinformation specifically related to elections. The Public Official Election Act governs the dissemination of false information about political candidates and their families. Those who publish such incorrect information may face penalties, including imprisonment with labor or fines, under Article 250 of the Public Official Election Act. Similarly, the Political Parties Act penalizes individuals who publish or distribute falsehoods about candidates or their families in connection with an intraparty competitive election for selecting a political party representative (Article 52), as well as false registration applications of a party (Article 59).
Second, there are laws that counter disinformation that poses a threat to the security and stability of the nation. Given South Korea’s divided status and its ongoing situation with North Korea, the imperative to combat disinformation that threatens national security is of paramount importance. The National Security Act imposes punishments on members of anti-state organizations or persons under their command who fabricate or disseminate a false statement likely to disrupt social order (Article 4). In a similar vein, the Act penalizes members of anti-state organizations who fabricate or disseminate false statements concerning a matter that is likely to disrupt social order (Article 7).
Third, defamation laws can be employed to combat disinformation that harms the reputation of others. Defamation in South Korea falls under the purview of both criminal and civil law. Consequently, individuals who spread defamatory disinformation may face penalties under both legal systems. The Criminal Act addresses defamation in Article 307, distinguishing between penalties for ‘stating the truth’ (Article 307(1)) and ‘stating a falsehood’ (Article 307(2)). Additionally, those who publish false facts to defame the deceased are subject to punishment under Article 308. When defamatory disinformation is disseminated through mass media such as newspapers, magazines or radio with an intent to harm someone’s reputation, Article 309 of the Criminal Act prescribes even more severe punishment. Beyond criminal consequences, defamation is recognized as a tort under the Civil Act.
Especially within the context of internet defamation, the Information and Communications Network Act imposes enhanced penalties (Article 70).Footnote 9 Article 44-7 specifically addresses the dissemination of unlawful information, encompassing defamatory online content that is intended to tarnish the reputation of others. A defendant is granted immunity if the statement alleged to be libelous is proven to be true and serves the public interest. Article 310 of the Criminal Act provides that if the facts presented are verified as true and solely intended to serve the public interest, they shall not be subject to punishment. This truth-based defense does not necessitate absolute accuracy; it may encompass inaccuracies or hyperbolic expressions to afford greater leeway for freedom of speech and the press. In determining whether a statement of fact is false, the Supreme Court clarified that (1) it should be considered in its entirety and (2) if the statement aligns with objective facts in significant aspects, even if it contains minor discrepancies or slight exaggerations in detail, it should not be deemed a false statement of fact.Footnote 10 In addition, if a defamatory statement involves only opinion, it is not actionable.Footnote 11
When a speaker genuinely believed their statement to be true, even if it is later proven false, the speaker may be granted immunity if that belief was a ‘reasonable belief’ grounded in the apparent accuracy of the statement.Footnote 12 Hence, a statement made in the public interest, based on a reasonable belief in its truthfulness at the time of utterance, can still be shielded under the ‘reasonable belief’ standard. This standard strongly safeguards freedom of expression by affording protection to statements that, although ultimately proven false, were made with a genuine belief in their accuracy.Footnote 13 The standard of ‘reasonable belief’ applies to the media domain as well. In one case where a journalist was sued for defamation, the Supreme Court ruled that the journalist should be exempt from liability because he had a ‘reasonable belief’ in his report, having tried to verify all the facts and contact witnesses in a criminal case.Footnote 14 Conversely, in another case, where a journalist conveyed a conviction without conducting additional interviews with the relevant people, the court found that he lacked a reasonable belief in the accuracy of the news report.Footnote 15
As the severity of penalties depends on the veracity of the statement, it is critical for the court to assess whether a statement is true or false in defamation cases. The Supreme Court has established that if the primary essence of an expression aligns with the objective facts, even minor discrepancies or slight exaggerations in the details do not render it false.Footnote 16 As long as the key element of the expression is true, any minor exaggerations or differences in how the factual relationship is portrayed should not lead to severe penalties for false expression.Footnote 17 The Supreme Court has ruled that when determining falsity, the overall impression should serve as the standard.
These criteria for evaluating truth are crucial when considering authenticity in the context of disinformation. In order for someone to be subject to penalties for disseminating false information, a significant portion of the content must be proven to be factually inaccurate, rather than merely constituting a minor error or exaggeration. Furthermore, even if the content is proven to be untrue, it may not lead to punishment if the person had a ‘reasonable belief’ in truth and the content is determined to contribute to the public interest.
In a case where several people were indicted for spreading false rumors about allegations of bribery and fraud involving competing candidates, the Court was tasked with determining whether they held a reasonable belief in the truthfulness of these rumors. The Seoul High Court emphasized the importance of fair elections through democratic procedures, which ‘establish the legitimacy of state power and foster the development of democratic politics’. The Court further emphasized that the propagation of ‘fake news’ that undermines these principles requires utmost attention.Footnote 18 Its judgment delineated that the defendants systematically and intentionally propagated ‘fake news’ through widely circulated media channels shortly prior to the election, with the aim of misleading voters regarding their choice of candidates. Given the gravity of their actions, the Court held that the defendants lacked a reasonable belief in the truth of their claims and must face severe punishment for defaming the victim. This case illustrates how spreading disinformation that tarnishes someone’s reputation can lead to serious legal consequences.
10.4 Making New Laws against Disinformation
The debate on regulating ‘fake news’ began in South Korea in early 2017, ahead of the nineteenth Korean presidential election. The fake news controversy surrounding the US presidential election in 2016 raised concerns that fake news could become prevalent in Korean presidential elections as well. In fact, former UN Secretary-General Ban Ki-moon, a prominent conservative presidential candidate at that time, announced his withdrawal from the race in February 2017, citing damage caused by fake news. During a subsequent meeting with the National Assembly, Ban highlighted his fake news cases to advocate for government regulation and legislation against fake news.
Since 2017, the National Assembly has introduced dozens of bills aimed at regulating fake news. Three main types of legislation have been proposed. First, standalone bills have been put forward, focusing solely on the regulation of fake news. An example of such a bill is the Act on the Composition and Operation of the Fake News Countermeasures Committee.Footnote 19 This proposal suggests the establishment of such a committee under the Prime Minister’s authority to oversee fake news regulation. Additionally, it suggests designating the Ministry of Culture, Sports and Tourism as the primary agency responsible for preventing the dissemination of fake news in print and online newspapers, while assigning the Communications Commission similar responsibilities for broadcasting and digital networks. The bill, aimed at tackling the problem of fake news, provides a definition for it as follows: ‘Fake news refers to information, whether distributed via print media, online newspapers, broadcasts, or communication networks, that is deliberately fabricated or manipulated for political or economic purposes, with the deliberate aim of misleading the public into believing it to be genuine news’.
Second, legislative efforts against disinformation have been made by proposing amendments to the Information and Communication Network Act, which serves as the overarching law governing online communication and safeguarding online users. Several proposed amendments to this Act would assign responsibilities to platform operators.Footnote 20 These proposals would require operators to monitor disinformation and to promptly remove or temporarily take down such content. Penalty surcharges would also be levied on platform operators that fail to fulfill these obligations. For instance, a bill from 2018Footnote 21 proposed that platform operators must undertake ‘necessary measures’ to remove or block fake news that harms others’ privacy and reputation. A bill from 2022Footnote 22 specified that, where the removal of online content is requested, the platform operator may request a review by the Online Dispute Resolution Committee.
Third, bills have been proposed to amend the Press Arbitration Act, aiming to establish heightened accountability for inaccurate reporting.Footnote 23 These bills are designed to ensure that traditional media outlets are held responsible for disseminating disinformation. Notably, some of these bills propose the imposition of punitive damages on media entities for the publication of false information. These bills to amend the Press Arbitration Act sparked considerable opposition from media companies and organizations, triggering social controversies.Footnote 24
The Press Arbitration Act stands out among press statutes worldwide.Footnote 25 This legislation, applied to news reports in print, broadcasting and on the Internet, affords individuals the right to request the correction of inaccuracies in factual news and to reply to factual claims within a news article if they have suffered harm due to such allegations. Furthermore, the law mandates the right to demand a follow-up story following a previous report about an individual involved in criminal proceedings, especially when the individual has been acquitted of criminal charges. Operating as a distinct form of media regulation, the Press Arbitration Act serves as an effective remedial system, facilitating the delicate balance between safeguarding press freedom and upholding an individual’s right to reputation and privacy.
The distinctive feature of the Press Arbitration Act lies in its provision of expedient and cost-effective redress for those who have suffered harm due to media content, enabling them to pursue resolution through press arbitration prior to resorting to the legal system. However, endeavors to regulate disinformation via the press arbitration mechanism underscore a prevailing societal distrust toward the Korean press. A longstanding history of ideological confrontation between conservative and liberal factions has given rise to a media landscape that is similarly divided along these lines. This polarization has bred a deep-seated skepticism regarding media coverage, especially when politicians, often engaged in fierce political struggles, denounce critical media as purveyors of fake news. The media itself is not immune from criticism for occasionally intervening in political battles through biased reporting.
Amidst this deeply divided context, the suggested revision to the Press Arbitration Act has emerged, aiming to establish punitive penalties for media outlets disseminating falsehoods. For instance, a bill proposed in February 2021Footnote 26 seeks to address false news content by categorizing it as fake news. Notably, Article 30(2) of this amendment introduces the concept of punitive damages, specifying that the amount of compensation should surpass the actual damage incurred if such damage stems from the public dissemination of erroneous or manipulated information through media reports, with the intention of defaming others. Moreover, the amendment incorporates a presumption of malicious intent on the part of the press.
Another billFootnote 27 introduces the provision that if a media entity inflicts harm upon an individual through the dissemination of false facts leading to defamation, the court holds the authority to grant damages up to three times the actual amount of harm incurred. The bill’s objective revolves around preventing the media from propagating fake news.
A proposed bill in June 2021Footnote 28 exhibits increased stringency compared to earlier amendments. Article 2(17) delineates ‘false or manipulative reporting’ as ‘the act of disseminating or conveying false facts or information manipulated to appear as facts through media outlets, Internet news services, or online multimedia broadcasts’. This definition has garnered criticism due to its exclusion of disinformation propagated on social media and platforms like YouTube. The amendment also provides for the victims to seek compensation ranging from no less than three times to no more than five times the amount of damage resulting from false reports by the news media (Article 30-2).
In sum, despite the introduction of numerous anti-fake news bills since 2017, none has successfully passed through the National Assembly. Many of these bills encountered challenges in defining the elusive concept of ‘fake news’, while others faced backlash for potentially infringing on freedom of expression online. The proposed amendments to the Press Arbitration Act were met with significant opposition from the media industry, as they appeared to target media outlets as primary purveyors of fake news. While the impact of fake news remains a concern, particularly with the advancement of artificial intelligence, the failure of these bills indicates that regulating disinformation through legislation will likely continue to pose challenges in the future.
10.5 Digital Platforms as Gatekeepers against Disinformation
With the evolution of the Internet, various challenges have emerged in the realm of online communication. Digital platforms, which serve as intermediaries for online expression, have assumed the role of digital gatekeepers, entrusted with the duty and responsibility to address these issues. In the dynamic landscape of the digital world, these platforms have increasingly shouldered a range of obligations.
The Information and Communication Network Act, serving as a fundamental statute governing internet regulation, is designed to offer online users an effective means of safeguarding their privacy and reputation. Article 44-2 of the Act mandates that the information service provider, when it receives a request from the alleged victim, must expeditiously delete or block the harmful contents and subsequently notify both the requester and the poster about the actions taken. Should the information service provider fail to remove harmful content from its platform, it can potentially face liability for defamation or invasion of privacy, in conjunction with the original poster. However, in cases where the information service provider can demonstrate a sincere effort to remove or block injurious content, the provider may be eligible for a reduction or even exemption from liability for damages arising from such content (Article 44-2(6)). Hence, this law creates a robust incentive for digital platforms to promptly remove harmful content.
In cases where digital platforms encounter challenges in ascertaining whether the information in question infringes upon the right of others, they are empowered to exercise discretion to restrict access to the information for a period of thirty days. The question of whether such takedown decisions by platforms may encroach upon freedom of online expression has been raised. Nonetheless, the Constitutional Court ruled that the temporary takedown process did not infringe upon online users’ freedom of expression.Footnote 29 The Court’s rationale for this decision was based on the rapid and extensive dissemination of harmful expression, where a temporary takedown was deemed the most effective means to prevent the propagation of defamatory content. The Constitutional Court further asserted that the significance of safeguarding reputation and privacy outweighed the complainant’s freedom of expression. In the Court’s view, the thirty-day period of temporary takedown was an appropriate means of protecting the reputation of a potential victim.
Moreover, the Supreme Court rendered a landmark decision regarding the liability of digital platforms in 2009. In the case of Kim v. NHN Corp.,Footnote 30 the Supreme Court established a key precedent by holding that digital intermediaries may be held liable even in instances where they had not been notified by the plaintiff about the existence of defamatory content. The case revolved around a plaintiff, Kim, who took legal action against three major Korean internet platforms on grounds of online defamation. Kim contended that these intermediaries had disseminated news articles alleging his betrayal of a pregnant ex-girlfriend. Additionally, Kim asserted that these intermediaries had facilitated various individual blogs where users disclosed his workplace and phone number, leading to relentless harassment. The intermediaries countered by asserting that they had simply distributed news articles originally published by legacy news media, and that they were bound by legal agreement with these media outlets not to modify news stories at their discretion.
The Supreme Court, however, rejected the intermediaries’ argument. It clarified that these intermediaries were more than mere search engines; they received news articles from various sources, stored this news in their database, curated stories for online publication, and ultimately shared these stories with the public. In light of their role as news aggregators, the Court held that these intermediaries should be held accountable akin to offline news media that had initially published the contentious articles.Footnote 31
In the current legal landscape, as shown above, existing Korean laws and precedents establish the legal responsibility of digital platforms for online content. This has placed heightened pressure on platform operators to take on the role of regulating disinformation. An illustrative example is the proposed amendment to the Information and Communications Network Act of 22 June 2020.Footnote 32 The amendment defines disinformation as ‘false information presented in the guise of a journalistic report, crafted with the intent to mislead recipients through a communication network’. Under this amendment, platform providers are obligated to promptly remove such disinformation. Failure to fulfill this duty may result in a penalty of up to 30 million Won. This framework represents a robust effort to place on platform operators the responsibility for curtailing the spread of disinformation, underscoring the importance of accountability and accuracy in online communication.
Another proposed amendmentFootnote 33 to the Information and Communications Network Act incorporates the term ‘fake news’. According to this amendment, fake news constitutes ‘false or distorted information with the aim of achieving political or economic gains and presented in a manner that resembles a news report’. This definition underscores the deceptive nature of such content, highlighting its potential to mislead the public while masquerading as legitimate news coverage. Under this provision, if adopted, platform providers must continuously monitor content to identify instances of fake news. Upon receiving a request to remove fake news or identifying it through monitoring, the proposed statute would obligate platform providers to remove such false information. The amendment further establishes penalties for noncompliance with the obligation of continuous monitoring. Should platform providers fail to fulfill this obligation, their employees and corporate officers would potentially be subject to imprisonment for up to one year or a fine of up to 10 million Won.
There are several other amendments, but they all suffer from a similar set of challenges. First, the proposed amendments lack clarity in their definitions of ‘fake news’ or ‘disinformation’. In order to comply fully with the governing Constitutional Court precedents on freedom of expression and the press, legislation aimed at penalizing publishers and disseminators of fake news must provide explicit and specific guidelines, allowing citizens and platform providers alike to anticipate which actions are potentially unlawful. However, most bills create ambiguity that hinders individuals from comprehending the boundaries of prohibited conduct. Particularly concerning is the difficulty of determining the threshold for categorizing information as ‘false’. Indeed, court precedents have illustrated that the concept of truth encompasses even minor inaccuracies and nuances, making it difficult for the general public to gauge the threshold at which state authorities would deem information to be ‘false’. Consequently, defining the precise boundaries of ‘fake news’ and ‘disinformation’ within the framework of the law becomes a formidable challenge, as attempting to encompass all forms of erroneous information under the label of fake news is impractical.
Second, obligating platform providers to monitor and take down fake news raises concerns about potential encroachments on online freedom of expression. Granting platform providers unchecked authority for private censorship, devoid of clear guidelines, jeopardizes the integrity of the online environment. Many of these legislative proposals entail substantial fines for platforms that fail to remove flagged false posts swiftly. Consequently, platform providers may opt for a conservative approach by removing or blocking flagged content without comprehensive verification, thereby posing a substantial risk to online freedom of expression. To mitigate this, it is imperative that the law not only authorize platforms to engage in monitoring but also establish a structured framework encompassing criteria and protocols for identifying disinformation. Procedural guidelines would also be necessary to permit someone accused of posting disinformation or misinformation to contest this characterization meaningfully. Further, ensuring transparency in the determination of disinformation is essential to strike a balance between combating fake news and protecting freedom of expression (as well as other forms of expressive freedom).
Third, excessively stringent penalties for fake news creators and disseminators also present a constitutional problem. Several legislative proposals punish people for generating or spreading fake news, but this potentially infringes upon the ‘principle of proportionality’.Footnote 34 As previously explained, the propagation of falsehoods is subject to aggravated punishment within the confines of defamation laws or the Information and Communication Network Act. Under these laws, the imposition of sanctions for falsehood should be limited to cases where tangible rights such as reputation or privacy have been violated. Therefore, it is inappropriate to impose penalties solely on the grounds of crafting and then disseminating false information. Instead, to avoid potential constitutional objections, a regulation that levies sanctions or imposes financial liability for damages may constitutionally do so only when the intentional creation or dissemination of falsehood actually results in demonstrable harm to others.
10.6 Limitations of Self-Regulation
Pure self-regulation, which grants operators autonomy at all stages from rule-making to post-management, remains an ideal model but faces challenges in practical implementation. The European Union has introduced a ‘regulated self-regulation’ or ‘co-regulation’ approach that enhances the efficacy of self-regulation. This is achieved by providing a foundational framework for platform operators’ self-regulation through legal and institutional mechanisms, as exemplified by the enactment of digital-related laws like the Digital Services Act.Footnote 35 These legislative measures render self-regulation more enforceable. The Information and Communication Network Act mandates that platform operators establish codes of conduct and self-regulatory guidelines to ensure user protection and provide safe and reliable communication services (Article 44-4). In accordance with this law, self-regulation is mainly overseen by the Korea Internet Self-Governance Organization (KISO), a consortium of sixteen platform companies, including major players such as Naver and Kakao.
In 2018, the KISO introduced a self-regulatory policy targeting ‘fake news presented in the format of news reports’, known as the ‘False Posting Policy in the Form of Media Reports’.Footnote 36 Amidst the COVID-19 pandemic, the KISO also devised and put into action a policy aimed at addressing misleading information related to the treatment and prevention of, and vaccination against, COVID-19.Footnote 37 This policy provides for appropriate actions, including content removal, in cases where posts are definitively identified as false and manipulative information based on official announcements from authoritative bodies such as the World Health Organization or the Centers for Disease Control and Prevention. Nevertheless, because the KISO’s activities are initiated based on requests from its member platform companies, the scope of posts subject to moderation remains limited. Consequently, although the KISO operates effectively as a self-regulatory mechanism, its impact in countering disinformation is somewhat limited. Overseas platforms such as Google, Meta and TikTok have not joined the KISO, preventing a collective effort to combat disinformation on YouTube and Facebook and further limiting the efficacy of KISO’s self-regulation efforts.
Under existing legislation, platform operators bear the responsibility of independently blocking illicit posts. Article 44-3 of the Information and Communications Network Act provides platform operators with the discretion to temporarily block posts for a maximum of thirty days. Platform operators may employ this measure when they determine that the information being disseminated on their platforms violates the rights of others, such as privacy infringement or defamation. However, in practice, this provision is infrequently invoked due to the considerable difficulty faced by platform operators in discerning whether a post genuinely infringes upon those rights. Although platform operators possess the technical and economic capacity to function as administrators of internet communication, imposing the exclusive duty of moderation upon them could potentially result in an excessive number of takedowns or disregard for harmful content. Although promoting platform operators’ self-regulation constitutes a laudable policy, the devil lies in the detail. Establishing clearer legal norms would provide more effective protection against the social harms caused by disinformation and misinformation. Moreover, better and more transparent fundamental regulatory frameworks and procedures would also help to ensure the effective implementation of these mandatory moderation policies.
10.7 Conclusion
Disinformation that is skillfully crafted to appear authentic can have dire consequences for democracy. It can infringe upon individual rights and cloud the rational judgment of citizens on crucial political matters. Furthermore, the term ‘fake news’ has cast a shadow on media credibility by associating it with ‘news’, thereby eroding trust in traditional journalism and undermining its very foundation. These issues have fueled the call for regulating fake news – that is, disinformation – prompting global efforts to devise legal remedies. South Korea, too, has been beset by fake news, which raised concerns during the presidential election and grew more severe during the COVID-19 pandemic.
In South Korea, where freedom of speech does not hold the same prominent position as it does in the United States,Footnote 38 a range of bills aimed at amending defamation law, internet law and press arbitration law have been employed to combat the dissemination of disinformation. However, South Korea’s endeavors to combat fake news through legislative means have proven to be less effective than anticipated, leading to increased social controversy. This highlights that the creation of new laws targeting fake news may not be the optimal solution. Instead of attempting to delineate a clear boundary between truth and falsehood under the banner of ‘fake news’, it is crucial to establish a transparent and inclusive mechanism for discerning authenticity. Strengthening the self-regulation of platform operators and providing a structured framework for self-regulation by government are pivotal steps in this direction. As digital technology and artificial intelligence continue to advance, the demand for regulation becomes even more pronounced.
11.1 Introduction
In April 2023, the Government of India amended a set of regulations called the Information Technology Rules, which primarily dealt with issues around online intermediary liability and safe harbour.Footnote 1 Until 2023, these rules required online intermediaries to take all reasonable efforts to ensure that ‘fake, false or misleading’ information was not published on their platforms.Footnote 2 Previous iterations of these rules had already been challenged before the Indian courts for imposing a disproportionate burden on intermediaries, and having the effect of chilling online speech.Footnote 3 Now, the 2023 Amendment went even further: it introduced an entity called a ‘Fact Check Unit’, to be created by the government. This government-created unit would flag information that – in its view – was ‘fake, false or misleading’ with respect to ‘the business of the central government’.Footnote 4 Online intermediaries were then obligated to make reasonable efforts to ensure that any such flagged information would not be on their platforms. In practical terms, what this meant was that if intermediaries did not take down flagged speech, they risked losing their safe harbour (guaranteed under the Information Technology ActFootnote 5).
The 2023 Amendment was immediately challenged before the High Court of Bombay, and at the time of writing, hearings were in progress. A perusal of the pleadings in the case reveals that the state’s defence of the 2023 Amendment rested on a few important aspects.Footnote 6 First, the state highlighted the dangers of online ‘fake news’, its threat to public order, and that the ‘virality’ of the Internet made counter-speech an unviable solution. Second, as a matter of constitutional law, the state argued that ‘fake’ or ‘false’ news was unprotected by the constitutional guarantee of freedom of speech and expression.Footnote 7 Citing US Supreme Court judgments such as Hustler Magazine v. Falwell, it noted that fake news had no ‘constitutional value’, and therefore did not deserve constitutional protection. And third, it argued that the spread of fake news undermined other constitutional rights, such as the right to informationFootnote 8 and – by extension – the right to vote (which the state read as a right to vote based on true and accurate information). This, in turn, allowed the regulation of online fake news in order to preserve other equally important constitutional rights.
The state’s arguments in the IT Rules challenge reveal an inherent tension within Indian free speech jurisprudence. In this chapter, and keeping the IT Rules challenge as the background framework, I will excavate this tension, and tease out its implications for the regulation of misinformation and disinformation in the contemporary context. Broadly, this tension is manifested in two lines of jurisprudence. The first line can be broadly characterised as the ‘autonomy’ approach to free speech. Classically liberal in orientation – and drawing from US First Amendment jurisprudence, although not as expansive as in the USA – the ‘autonomy’ approach advocates for a degree of content-neutrality in restrictions to free speech. Its underlying principle is that autonomous individuals decide for themselves how to receive and respond to speech. The second line can be called the ‘perfectionist’ approach to free speech (I use ‘perfectionist’ in the sense that it is used by political philosophers, such as Joseph RazFootnote 9). The perfectionist approach as it appears in Indian jurisprudence eschews content neutrality, links the degree and nature of constitutional protection that speech has to its ‘value’, and is comfortable with the state being the arbiter of value. While these two lines of jurisprudence have proceeded on an almost parallel track in the seven decades of constitutional history, it is in cases such as the IT Rules challenge – I will argue – that they come to a head; and it is in contexts such as that of contemporary online disinformation that both lines reveal their limitations, and demonstrate the need for a new way of thinking about the relationship between speech, misinformation and the Constitution.
11.2 The Autonomy Line
Article 19(1)(a) of the Indian Constitution guarantees to all citizens the right to freedom of speech and expression. Article 19(2) authorises the state to make, by law, ‘reasonable restrictions’ upon the exercise of this freedom, ‘in the interests’ of eight different categories. These include, for example, ‘public order’, ‘defamation’, ‘incitement to an offence’ and so on.Footnote 10 The structure of India’s free speech clause largely resembles the familiar, two-tiered structures that we see in many constitutions: the declaration of the right, followed by an exhaustive list of circumstances under which it may be restricted.Footnote 11 This approach precludes ‘content neutrality’ in the US First Amendment sense, as Article 19(2) itself authorises the restriction of speech on the basis of its content. The proof of the pudding, however, is in the interpretation.
The debates of the Constituent Assembly reveal the tension underlying the free speech clause right from the outset: liberal arguments in favour of minimising the restrictions clause were met with perfectionist arguments extolling the expertise and legitimacy of the newly elected government.Footnote 12 This internal tension has since spilled over into the judicial interpretation of the free speech clause.
Right from the beginning, a set of Indian judgments cited classic liberal arguments – from John Stuart Mill to Oliver Wendell Holmes’ ‘marketplace of ideas’ – as the underlying foundation of the free speech clause. This is most clearly visible in the evolution of Supreme Court jurisprudence on socially subversive, or dissident, speech. In 1960, the Supreme Court set aside the criminal conviction of a politician who had been prosecuted for exhorting citizens not to pay a tax that he considered unjust.Footnote 13 Responding to the state’s arguments that incitement to disobedience of a law could spark a general revolution, the Court insisted on a ‘proximate link’ between speech and the feared public disorder before it could be restricted or criminalised.Footnote 14 A few years later, the Court used the colourful analogy of a ‘spark in a powder keg’ to describe the degree of proximity that was required between speech and public disorder.Footnote 15 Most recently, while considering certain broadly worded provisions of the Information Technology Act, the Court drew a distinction between ‘advocacy’ and ‘incitement’, holding that only the latter could be criminalised consistent with the free speech clause of the Constitution.Footnote 16
The distinction between ‘advocacy’ and ‘incitement’, in particular, is a clear statement of the autonomy approach to free speech. It brings to mind Justice Brandeis’ famous observation that as long as there is time to respond, the remedy to subversive speech is counter-speech, and not censorship.Footnote 17 The idea underlying this is the default presumption that the autonomous listener is responsible for choosing how to take – and respond to – speech, except in narrowly defined cases of diminished autonomy.Footnote 18 The classic example given is of shouting ‘fire’ in a crowded theatre, a situation where the listeners have no time to evaluate the truth of the speech but must act immediately. The requirement of imminence – the Brandenburg test, as it is known under First Amendment jurisprudenceFootnote 19 – broadly tracks this underlying theory, limiting itself to situations such as inciting an already enraged mob to commit acts of direct violence.Footnote 20
The autonomy approach to free speech is also evident in other aspects of Indian free speech jurisprudence: in particular, in cases dealing with the question of whether some forms of speech are inherently more ‘valuable’ than others. In a series of early cases, for example, the Supreme Court beat back the government’s attempts to curtail the volume of advertisements in newspapers by drawing a direct link between advertisements, income flow, volume of circulation and – thereby – the newspapers’ right to free speech.Footnote 21 Another example is the ‘commercial speech’ doctrine, where the Court departed from earlier jurisprudence to hold that commercial speech was as much protected by Article 19(1)(a) as any other form of speech.Footnote 22
This vein of Supreme Court jurisprudence is what the petitioners in the IT Rules case tapped into in order to rebut the state’s argument that ‘fake’ or ‘false’ speech is excluded from Article 19(1)(a) protection altogether, because it possesses low or no value.Footnote 23 The autonomy approach to free speech – which regards listeners as autonomous individuals, responsible for their own choices – precludes a priori assessments of value, and any attempt to link constitutional protection to the value of speech, other than that which is already in the Constitution. The basic idea is that judgements of value are to be made by the listeners and not by the state. The structure of Article 19(1)(a) and (2), which specifically sets out the kinds of content that can be regulated, appears to support arguments against additional value judgements being made by the state: after all, the argument goes, those value judgements have already been made – exhaustively – within the Constitution itself. And false speech – for both principled and pragmatic reasons – is not one of the categories under Article 19(2).
11.3 The Perfectionist Line
However, even as early Supreme Court judgments were entrenching the autonomy approach to free speech, there were other judgments that took another line, and accentuated the tension inherent within the interpretation of Articles 19(1)(a) and 19(2). Early on, when considering a ban on misleading medicinal advertisements under a law evocatively titled the Drugs and Magic Remedies Act, the Supreme Court upheld the ban by noting that advertisements for magical remedies were unprotected by the free speech guarantee because they made no contribution to the exchange of ideas – political, economic or cultural – that constituted the basis for why there existed a free speech guarantee in the first place.Footnote 24 Here, the Supreme Court constructed a hierarchy between kinds of speech, assigning them relative value based on their ‘contribution’ to the democratic public sphere, and linking the degree or nature of constitutional protection with the value they had. Apart from this, an unarticulated major premise of the judgment, of course, was that the target audience of misleading information would not be in a position to sift the wheat from the chaff, or be able to identify the misleading content in medicinal advertisementsFootnote 25 – something that justified state intervention in this domain.
Similar reasoning can be glimpsed in the evolution of the Court’s jurisprudence on obscenity. In its early days, the Court adopted the Victorian British ‘Hicklin test’, which asked whether speech had the tendency to ‘deprave or corrupt’ the minds of those into whose hands it might fall.Footnote 26 Over the course of time, this test was gradually liberalised, until in 2011, the Court replaced it with the US Miller test, where the focus is on whether the offending work appeals solely to the ‘prurient interest’.Footnote 27 However, whether under the more strait-laced Hicklin test or the more liberal Miller test, what unites them both is the belief that certain kinds of speech are inherently valueless. This is made clear by the fact that both judgments – as well as the obscenity provision in India’s Penal CodeFootnote 28 – have exceptions for works with genuine ‘literary’ or ‘cultural’ value.Footnote 29 Once again, therefore, in the realm of obscenity jurisprudence, we have a judgement – made by the state, and subject to judicial review by the courts – about the value of forms of speech, one that links perceived value with constitutional protection.
The perfectionist impulse is most strikingly visible in a judgment called Union of India v. Motion Picture Association, where the Court was considering a state requirement for cinemas to show short ‘educational’ or ‘cultural’ documentaries before the start of the film.Footnote 30 This was challenged on grounds of being akin to a ‘must-carry’ provision, tantamount to compelled speech. The Supreme Court’s response is instructive, and merits quoting in some detail:
However, whether compelled speech will or will not amount to a violation of the freedom of speech and expression, will depend on the nature of a ‘must carry’ provision. If a ‘must carry’ provision furthers informed decision-making which is the essence of the right to free speech and expression, it will not amount to any violation of the fundamental freedom of speech and expression. If, however, such a provision compels a person to carry out propaganda or project a partisan or distorted point of view, contrary to his wish, it may amount to a restraint on his freedom of speech and expression. To give an example, at times a statute imposes an obligation to print certain information in public interest. Any food product must carry on its package the list of ingredients used in its preparation, or must print its weight. These are beneficial ‘must carry’ provisions meant to inform the public about the correct quantity and contents of the product it buys. It enables the public to decide on a correct basis whether a particular product should or should not be used. Cigarette cartons are required to carry a statutory warning that cigarette smoking is harmful to health. This is undoubtedly a ‘must carry’ provision or compelled speech. Nevertheless, it is meant to further the basic purpose of imparting relevant information which will enable a user to make a correct decision as to whether he should smoke a cigarette or not. Such mandatory provisions, although they compel speech, cannot be viewed as a restraint on the freedom of speech and expression.Footnote 31
And:
We have to examine whether the purpose of compulsory speech in the impugned provisions is to promote the fundamental freedom of speech and expression and dissemination of ideas, or whether it is to restrain this freedom. The social context of any such legislation cannot be ignored. When a substantially significant population body is illiterate or does not have easy access to ideas or information, it is important that all available means of communication, particularly audiovisual communication, are utilised not just for entertainment but also for education, information, propagation of scientific ideas and the like. The best way by which ideas can reach this large body of uneducated people is through the entertainment channel which is watched by all – literate and illiterate alike; to earmark a small portion of time of this entertainment medium for the purpose of showing scientific, educational or documentary films, or for showing news films has to be looked at in this context of promoting dissemination of ideas, information and knowledge to the masses so that there may be an informed debate and decision making on public issues. Clearly, the impugned provisions are designed to further free speech and expression and not to curtail it.Footnote 32
The excerpted passages are important because they go to the heart of the perfectionist approach to free speech. The Court here drew a clear-cut distinction between ‘propaganda’ or ‘partisan speech’ or a ‘distorted point of view’ and (presumably accurate) information that ‘furthered informed decision-making’. A must-carry rule that required compulsory broadcasting of the former would violate the right to freedom of speech and expression, but a requirement of carrying the latter would not, since its net effect was to further the right by providing information to all, rather than curtailing it.
It is particularly instructive that the Court drew a link between consumer warnings on cigarette packs (a classic example, as indicated above, in the consumer fraud domain, where concerns of autonomy are thought to be of relatively low intensity) and the domain of political speech, where the concerns of autonomy are believed to be at their highest. The overarching theme, then, cuts across domains of speech: the perfectionist approach to free speech sees the role of free speech in a democracy as performing a very specific function, and accords constitutional protection only to that kind of speech that is consistent with this function. This necessarily requires the state – and the courts – to construct an internal content-based hierarchy within speech, and to assign social value to forms or kinds of speech.
This, it must be noted, is distinct from a free speech jurisprudence where regulations or restrictions are justified not on the basis of the value of the speech in question but by virtue of its impact on other constitutional principles. Hate speech jurisprudence, for example, is based not on the premise that hate speech is low or no-value speech, but that it is destructive of the constitutional principles of equality and equal protection.Footnote 33 This jurisprudence is also at odds with a strict articulation of the autonomy approach to free speech.Footnote 34 But at the same time, liberal democracies have been broadly able to reconcile the existence of hate speech laws with a general autonomy-based approach to free speech: there is an understanding that the principles underlying the free speech guarantee need to be balanced against the maintenance of other constitutional principles, which – in turn – will require overriding autonomy in select domains, without conceding its own importance as a constitutional principle.Footnote 35
What is crucial to note is that such arguments do not depend – as already pointed out – on assigning values to forms of speech. The criminalisation or penalisation of ‘misinformation’ or ‘disinformation’ simpliciter, however, raises different concerns. For instance, in the IT Rules case, had the clause referred to ‘fake or false speech’ that ‘incites public disorder’, or ‘fake or false speech’ that ‘vilifies a community of people’, different constitutional questions would have arisen. By contrast, penalising misinformation or disinformation simpliciter is – in effect – penalising bare falsehood. To defend such a provision – which, on its face, falls outside the eight sub-clauses of Article 19(2) – would require the state to construct an argument along the lines of low- and high-value speech, as the only other option is to argue that false speech lies outside the ambit of the free speech clause altogether. As we have seen, this is indeed what the state did, and how it chose to defend the constitutionality of the 2023 IT Rules Amendment when it was challenged in court.
And it is this defence that lays bare the tension between the autonomy and perfectionist approaches to free speech, which have pulled Indian free speech jurisprudence in different directions from the inception of the Indian Constitution. As we have seen, it is in the context of contemporary online misinformation and disinformation – with its virality, and its admittedly corrosive effects on democracy – that these tensions become particularly stark. And in the final section, we shall examine how these tensions play out in this context.
11.4 Misinformation and Disinformation in Contemporary Contexts
As pointed out, the state’s core argument in the IT Rules case was that outright falsehoods are unprotected by the constitutional guarantee of free speech, as false speech has no constitutional value. On the flip side, the state argued that citizens have a right to receive true and accurate information, a right that is frustrated in an online environment awash with falsehoods. In particular, the state linked this right to the right to vote: it argued that false or fake news distorted the voting process – and thereby democracy – by distorting the base of information upon which voters made their decisions.Footnote 36 (This last argument, one will notice, specifically takes issue with the autonomy approach to free speech.) And finally, it flagged the specific character of the Internet and online speech: in particular, virality, which makes it difficult for false speech to be ‘countered’ with facts.
Taken on their own, these arguments do not necessarily sound unreasonable. The constitutional issue, however, lies in the regulatory mechanism that the state devised to deal with the problem: a regulatory mechanism that involved a government-mandated fact-check unit ‘checking’ speech about the government, and backing it up with the coercive consequences of losing safe harbour. To elaborate, safe harbour – recognised by Section 79 of the Indian Information Technology Act – refers to the legal position where intermediaries cannot be held liable for the content that is posted or disseminated on their platforms. Safe harbour is normally a qualified right, imposing certain obligations of due diligence upon the intermediary. In India, this takes the form of ‘actual knowledge’ – that is, the intermediary needs to be notified, either by a court order or a government notice, that it is hosting unlawful content. If, on receiving such knowledge, the intermediary refuses to act, it then ‘loses’ safe harbour, and is open to lawsuits seeking to hold it liable for the offending speech.
Deprivation of safe harbour is therefore not direct censorship, such as a book ban or the blocking of a website. Theoretically, an intermediary can disagree with the government on the issue of whether flagged content is actually lawful or unlawful, defend its assessment in court, and even possibly win. Practically, however, safe harbour is a vital safeguard that allows intermediaries to perform their functions without the constant threat of multiple lawsuits hanging over their heads. An intermediary will – in its own interests – prefer to comply with a government directive rather than lose safe harbour. The coercion is therefore indirect – a threat of the loss of safe harbour and the consequences that follow – and the free speech implications lie in accompanying self-censorship and the chilling effect.
This detour is important, as it was the basis of the petitioners’ argument that the language of the 2023 IT Amendment creates a slippage between regulating misinformation and making the state the arbiter of truth. The possibility of such a slippage is always present in state efforts to tackle misinformation, and – the petitioners argued – the 2023 Amendment reveals how it actually happens. If the consequence of an intermediary not complying with the government Fact Check Unit is the loss of safe harbour, then – as far as the intermediaries are concerned (and by extension, the users of the platform, both the speaker and the recipient of the speech) – the government Fact Check Unit’s assessment of what is true or false is effectively determinative.
This leads to a few problems. The most obvious one, of course, is whether a free speech guarantee is consistent with allowing the state to act as an effective arbiter of truth, even if one were to accept the proposition that false speech is outside the pale of constitutional protection. It is, after all, one thing to say that false speech should be regulated, and another thing to say that the state (and in this case, specifically, the political executive) should be the body that has control over regulation. While the perfectionist line of free speech jurisprudence in India might well support the first proposition, the second presents a greater challenge. The autonomy approach, on the other hand, would reject out of hand the regulation of bare falsehood altogether, as – in the absence of any legally cognisable harm traceable to Article 19(2) of the Constitution – its rationale can be nothing other than an internal hierarchy within Article 19(1)(a).
Secondly – as petitioners’ counsel pointed out – there is a range of statements and forms of speech where the question is not merely whether something is true or false, but whether it falls within the true/false binary at all. For example, consider the judgment of the South African Constitutional Court in DA v. ANC. The case involved a bulk SMS sent by the main opposition party, the Democratic Alliance, which stated that an impartial report ‘shows how [President] Zuma stole your money’ to build his home, whereas the report had not made any finding of ‘theft’ against President Zuma.Footnote 37 Justices of the constitutional court split over the exact meaning of the phrases ‘shows how’ and ‘stole’, with some judges holding that it was an allegation of theft (and therefore, false), while others held that it was an accusation of unethical behaviour (and therefore neither true nor false, only convincing or unconvincing). As the case reveals, therefore, there is a subjectivity underlying not merely whether a claim is true or false, but the nature of the claim itself.Footnote 38
Although this issue would dog any regulatory attempt to curtail misinformation or disinformation online, it acquired a particular salience in the IT Rules case, as the regulatory form was one that proposed to give the executive the effectively determinative power of settling such questions. But although the issues and arguments arose in a specific context, it is evident that they need to be considered by regulators grappling with these questions across the board.
The IT Rules case is complicated by the tensions between the autonomy approach and the perfectionist approach within Indian free speech jurisprudence (by now, it should be obvious that this tension exists in the constitutional jurisprudence of most liberal democracies). It does appear – for the reasons pointed out above – that the IT Rules, in their specific form, go even beyond what is permitted by the perfectionist approach.
At the same time, it also appears that a strict application of the autonomy approach is not entirely appropriate to the era of mass online misinformation, with its well-known corrosive effects on democracy. As the scholarship around hate speech has taught us, it is not only in the Brandeis-type situations of incitement, or shouting ‘fire’ in a crowded theatre, that the remedy of counter-speech fails. There are other reasons why – and other contexts in which – counter-speech can fail. This is especially true when one accounts for – in the words of Jack Balkin – the ‘infrastructure’ of free speech, and recognises that unequal access to that infrastructure determines who can speak, how much and to what extent.Footnote 39 The question of virality on the Internet – and specifically the virality of misinformation on the Internet – often tracks inequalities of power and resources in the offline world, which are transplanted into the online domain.
Thus, neither the autonomy approach nor the perfectionist approach – the two strands of Indian free speech jurisprudence – might give us all the adjudicatory resources we need to fit the regulation of online disinformation and misinformation within the contours of constitutional jurisprudence. However, there are other, promising avenues: for example, in a one-off judgment, the Supreme Court has recognised a right of reply as forming a part of the right to free speech;Footnote 40 another has recognised the importance of access to the infrastructure of speech.Footnote 41 How this might work in the online domain remains an open question: for instance, the success of Twitter’s (now X) ‘Community Notes’ – a form of community-moderated reply – is much debated. What is clear, however, is that any attempt must draw from the autonomy-respecting strand of Indian free speech jurisprudence while also taking seriously the problem of online misinformation in shaping the nature and form of regulatory remedies. The IT Rules are perhaps an example of how not to do it: that is, by eschewing autonomy entirely, taking the perfectionist approach to its extreme, and opting to combat misinformation by making the state the arbiter of what is misinformation and what is not.
12.1 Introduction and the African Context
This chapter reviews the regulation of disinformation from an African human rights law perspective, focusing on the right to freedom of expression and the right to vote. It provides an overview of the African regional law framework, specifically the African Charter on Human and Peoples’ Rights of 1981 (the African Charter)Footnote 1 and corresponding jurisprudence.Footnote 2 The chapter also analyses the way in which freedom of expression and disinformation laws have been applied in African countries, the aim being to contextualize and illustrate how African regional law plays out at the domestic level, but with an emphasis on the position in South Africa.
The African treatment of disinformation makes a valuable contribution to a book addressing the relationship between the regulation of freedom of expression, specifically disinformation, and the promotion of the democratic state. There are three key reasons. Firstly, most research on the impact of ‘fake news’ incidents on democracies has been conducted in the Global North, with few case studies considering the situation in countries positioned in the Global South. This is so, even though (a) disinformation as a concept has existed for years in the Global South and (b) former United States president Donald Trump took credit for coining the term ‘fake news’.Footnote 3
Secondly, many African democracies are in a precarious position, with some governments implementing multifaceted strategies to silence their political opponents or critics, especially during elections. These include laws criminalizing the publication of false news, the filtering of online content and internet shutdowns.Footnote 4 The consequences of such strategies for democracies are dire: they undermine the rule of law, transparency, accountability, the right of citizens to receive information and the electoral processes.
Thirdly, and in juxtaposition to the use of censorship to suppress the spread of information, political campaigners in numerous African states have discovered how the freedom of the Internet can be used to create and disseminate false news and divisive digital content (through inter alia algorithms, bots, fake social media accounts, image manipulation and even the broadcast of fake information designed to appear as emanating from legitimate and reputable service broadcasters, such as the BBC).Footnote 5 Such disinformation is usually intended to distort political debate, silence opponents and win votes.Footnote 6
The South African regulation of disinformation is also insightful. South Africa’s political history makes for a fascinating country study, given its negotiated transition from a colonized and racist state, with strict censorship laws, to a constitutional democracy, based on human dignity, equality and freedom. The Bill of Rights in South Africa’s ConstitutionFootnote 7 protects both the right to freedom of expressionFootnote 8 and the right to access information.Footnote 9 Political rights are entrenched in Section 19 of the Constitution, giving every citizen the freedom to make political choices and the right to free, fair and regular elections. Section 19 should be read with Section 1 of the Constitution,Footnote 10 which provides that South Africa is ‘one, sovereign, democratic state’ founded upon listed constitutional values, with Section 1(d) recording that the constitutional democracy is based on ‘universal adult suffrage, a national common voters roll, regular elections and a multi-party system of democratic government, to ensure accountability, responsiveness and openness’.Footnote 11 This means that, in the South African context, where the spread of disinformation is often linked to political control, a delicate balancing of competing rights is required to ensure that the regulation of disinformation does not unjustifiably violate the right to freedom of expression.
The stability of the South African democracy and a concomitant free press and media, which have been instrumental in uncovering widespread state corruption,Footnote 12 have however been undermined by challenges aimed at stifling freedom of expression.Footnote 13 Examples include a state-proposed Media Appeals Tribunal,Footnote 14 stricter state control over the flow of information through a proposed Bill on the Protection of State Information,Footnote 15 the spreading of false information and propaganda whilst electioneering,Footnote 16 and threats and attacks on journalists during election campaigns.Footnote 17 Additionally, incidents of extensive media censorship were exposed in the Judicial Commission of Inquiry into allegations of state capture, corruption and fraud in the public sector including organs of state (the ‘Zondo Commission’), convened at huge state expense between 2018 and 2022.Footnote 18
Even more intriguing is that one of the most prominent global examples of the spread of false information, the ‘Bell Pottinger affair’, occurred in South Africa in 2016.Footnote 19 It involved a British public relations firm in a ‘large-scale fake news propaganda war’,Footnote 20 designed as a clandestine campaign to perpetuate racial polarisation in South Africa. It was really aimed, however, at influencing public opinion and discrediting detractors of former South African President Jacob Zuma who were threatening to expose his corrupt relationship with the Gupta family (the instigators of state capture).Footnote 21 Disinformation was spread through media outlets owned by the Guptas,Footnote 22 fake blogs and tweets (amplified by the use of bots which generated automated retweets),Footnote 23 catchphrase hashtags designed to undermine critics (such as #PravinMustGo and #WhiteMonopolyCapital),Footnote 24 and the manipulation of social media sites, including Facebook and Wikipedia. Journalists who exposed the corrupt relationship between Zuma and the Guptas were accused of being lackeys of ‘white monopoly capital’ and were repeatedly threatened.Footnote 25
Having sketched this contextual background, the chapter commences with a brief description of the definitional terms used and then links the regulation of misinformation to the rationales for the protection of freedom of expression. Thereafter, the way in which African regional law protects freedom of expression and access to information is examined, including recent developments by institutions operating at the African regional level. This is followed by examples of reported incidents of the spread of disinformation and censorship in African countries, usually occurring during election time, and a description of the type of false news laws applying in Africa. Finally, the South African approach to the promotion of freedom of expression is analysed, focusing on disinformation, which is illustrated through the normative human rights lens of freedom of expression, free and fair elections, government accountability and the promotion of democratic values.
12.2 Definitional Concepts
Any chapter in a book dedicated to the protection of freedom of expression and disinformation should use the correct nomenclature. So, whilst the chapter refers occasionally to the catchphrases ‘false news’ and ‘fake news’, the term ‘disinformation’ is preferred. Disinformation is defined as information that is verifiably false or misleading and is created and disseminated with the intention of causing public harm.Footnote 26 In the African context, public harm consists mainly of threats to democratic principles and the undermining of free and fair elections, as well as of political and policy-making processes.Footnote 27 It also includes the undermining of pluralism and diversity in democratic societies, usually through the spread of vitriolic messages intended to exploit divisions in society and to subordinate vulnerable groups, based inter alia on race, ethnicity, social origin and religion.Footnote 28
‘Disinformation’ does not include information that is false, but which was not created and distributed with the intention of causing harm. Instead, this type of information is usually labelled ‘misinformation’ and includes reporting errors or news and commentary that is identified as being partisan to a particular viewpoint or political party.Footnote 29 Although there is no consensus on the correct legal meaning of fake or false news,Footnote 30 the terms refer ‘to false information that mimics news media content in order to deceive so as to influence various reactions from the public’.Footnote 31 The terms are broadly used and include hoaxes, the manipulation of photos, clickbait campaigns, deliberate political disinformation campaigns, propaganda and even genuine political satire and parodies.Footnote 32
The discussion now moves to the traditional rationales justifying the protection of freedom of expression and whether they permit the regulation of ‘false news’ and/or disinformation. It will be shown that African countries tend to regulate the dissemination of false news in overly broad terms, including the media and misinformation within their ambit. These laws impact negatively on the protection of freedom of expression and democratic principles.
12.3 The Relationship between Disinformation and the Rationales for Freedom of Expression in the African Context
The earlier contextual background confirmed that, in most African states, the public harm that disinformation causes is the spread of false information aimed at influencing public opinion during elections, silencing opposing views and undermining groups of persons perceived to have power (whether economic or political).Footnote 33 Often, the purpose of the dissemination is to gain or retain power and to control public revenues (which could potentially be misused through corruption).Footnote 34 The African context therefore demonstrates the strong connection between freedom of expression and its most commonly advanced rationale, namely that free speech is an integral component of the proper functioning of a democracy and helps to promote democratic self-government by the people. This is true both for the formation of a democracy and the strengthening of a developing or fragile democracy.Footnote 35
The democratic rationale for freedom of expression, which has influenced the development of free speech jurisprudence worldwide (including in South Africa),Footnote 36 is based on the premise that speech enables people to participate in political and social debate, to have access to governmental policies and to make informed decisions about how they are governed.Footnote 37 In sum, free expression is vital in a democracy for three reasons. Firstly, in a democracy, the people are assumed to be sovereign and should play a significant role in decision-making. Freedom of expression ensures that they receive relevant information to enable the exercise of democratic self-governance and to ensure ‘wise decisions by wise voters’.Footnote 38 Both censorship of information and the spread of disinformation can distort the thinking capacity of the community and undermine a democracy.Footnote 39 Those in power should thus not be able to manipulate the flow of public information or stifle public debate.Footnote 40
Secondly, freedom of expression acts as a check and balance against the abuse of power and promotes governmental accountability and transparency. A democratic government should be regarded as the ‘servant’ of the people, entitling the latter to criticize their leaders, government officials and implemented policy, where necessary.Footnote 41 Here, the role of the media as public watchdog is critical.Footnote 42 So, any law that stifles a free media under the guise of false news regulation should be closely scrutinized to ensure that it does not encroach unduly upon freedom of expression.
Thirdly, free speech promotes social stability by allowing everyone to participate in public speech, resulting in the expression of diverse views and an informed and sovereign electorate.Footnote 43 Ironically, as has been shown, both disinformation and its regulation have the potential to undermine the democracy rationale. To ensure the integrity of freedom of expression, a balance must be struck between the two opposing interests so that only disinformation which causes public harm (as described earlier) is restricted.
For the regulation of disinformation, another three interconnected rationales – namely the truth rationale (the ‘marketplace of ideas’ metaphor),Footnote 44 the ‘fear of government suppression’ rationale and the ‘checking valve’ theory – are also important.Footnote 45 It is arguable that false information is incapable of advancing the truth, justifying the need for disinformation laws.Footnote 46 At face value this claim appears to have merit, but it fails to consider that the underlying basis of the rationale, as formulated by John Stuart Mill, is that views censored because of their supposed falsity may in fact be true and that their elimination forecloses the possibility of ‘exchanging error for truth’.Footnote 47 The public are more likely to learn the truth if exposed to varied views. Plus, the suppression of information, even if potentially false, undermines the attainment of human knowledge, because censors are not infallible.Footnote 48 The banning of perceived false beliefs can be dangerous because it may suppress some true beliefs, impeding the search for the truthFootnote 49 and, in turn, permitting undue censorship.Footnote 50 This is especially problematic in the African context, where false news laws are introduced to censor media that express views critical of those in power. In reality, the effect is that oppressive governments usually retain power.Footnote 51
The fear of government suppression theory adds weight to the argument. Its proponent, Frederick Schauer, explains that ‘[f]reedom of speech is based … on a distrust of the ability of government to make the necessary distinctions, a distrust of governmental determinations of truth and falsity, an appreciation of the fallibility of political leaders, and a somewhat deeper distrust of governmental power’.Footnote 52 Governments are considered ‘particularly bad’ at regulating speech, with ample historical evidence of censorship gone wrong.Footnote 53 In practice, censorship of public speech is usually entrusted to government officials, who are inclined to be biased and may suppress speech critical of government in order to retain power.Footnote 54 The African regional and sub-regional courts have regularly used this rationale to declare overly broad speech restrictions illegitimate.
The ‘government suppression’ rationale is closely linked to the ‘checking valve’ theory, proposed as a ‘vital’ supplement to the traditional free speech values.Footnote 55 Noting the abuse of government power as ‘an especially serious evil’, the claim is that free speech, in conjunction with a free press, provides an important check on government authority through exposure and deterrence.Footnote 56 The scrutiny and exposure of government activities by organized, well-financed and professional critics (the press) ensures that corrective action can be taken when an abuse of power occurs and, if aware of public scrutiny, officials are less likely ‘to yield to the inevitable temptation presented to those with power to act in corrupt and arbitrary ways’.Footnote 57 The distrust of government censorship rationale resonates in South African jurisprudence, with the judgment in S v. Mamabolo (E-tv Intervening)Footnote 58 serving as the best example. Here, it was held that ‘[h]aving regard to our recent past of thought control, censorship and enforced conformity to governmental theories … we should be particularly astute to outlaw any form of thought-control, however respectably dressed’.Footnote 59
Insofar as the regulation of disinformation is concerned, care must be taken to ensure that robust (but fair) criticism of politicians and government officials is not misconstrued as false or fake news and suppressed as such. For African states, tightly drafted laws are crucial to ensure the promotion of accountability, transparency, good governance and democratic values and principles.Footnote 60 As noted by Robert Keohane: ‘rulers generally dislike being held accountable. Yet they often have reasons to submit to accountability mechanisms. In a democratic … system, accountability may be essential to maintaining public confidence … But we can expect power holders to seek to avoid accountability when they can do so without jeopardizing other goals’.Footnote 61 When it comes to elections, accountability is even more pressing, because as Agnès Callamard has observed, state accountability is impossible ‘without a fully functioning parliament and free and fair elections, all of which require respect for freedom of … expression, transparency, [and] freedom of information’.Footnote 62 A free and robust media is an added safeguard for state accountability. False news laws which include the media in their ambit have a chilling effect on freedom of expression and the ability of the media to report and investigate freely and without fear of sanction. Similarly, they hamper the capacity of citizens to participate democratically, contribute to decision-making and enrich democratic pluralism.Footnote 63
It is now necessary to move to African regional law, noting upfront that the African Commission on Human and Peoples’ Rights (the African Commission) has repeatedly stressed the importance of freedom of expression and the role of the media, both as a human right and as a means of achieving state accountability. Indeed, it has also emphasized that freedom of expression and the right of access to information held by public bodies promote public transparency, accountability, good governance and the strengthening of democracy.Footnote 64
12.4 Freedom of Expression and Democratic Principles in African Regional Law
This section commences with an explanation of how the African Charter protects the free flow of information and freedom of expression. The textual limitations to the rights are also addressed. It then moves to the work done by the African Commission and the African Court on Human and Peoples’ Rights (the African Court), including sub-regional courts, to advance the right to freedom of expression and a free media, focusing on the link between expression and the advancement of democratic governance.
12.4.1 The African Charter
Article 9(1) of the African CharterFootnote 65 provides that every individual has the right to receive information.Footnote 66 Article 9(2) goes on to provide every individual with the right to express and disseminate opinions within the constraints of the law. Whilst Article 9(2) recognizes that the right to freedom of expression can be limited, the ‘within the constraints of the law’ clause has caused difficulty. It has, however, been interpreted to mean that only domestic restrictions consistent with state parties’ international and Charter obligations are permissible.Footnote 67 This means that laws enacted to regulate disinformation must comply with the standard legitimacy and proportionality tests for limitations to freedom of expression in international law, both offline and on digital platforms.Footnote 68
The African Charter, however, does not attach specific limitation clauses to individual rights, as most international human rights law instruments do.Footnote 69 Nevertheless, Article 9 of the Charter must be read with Article 27(2), which provides that all rights and freedoms are to be exercised with due regard for the ‘rights of others, collective security, morality and common interest’. This is a general limitation clause.Footnote 70 Arguably, false news laws could be introduced to protect the rights of others, the collective security, morality and the common interest, although, as mentioned, any restrictions to Charter rights must also comply with binding international law.Footnote 71
Article 29 of the African Charter, known as the ‘duty clause’, can also limit freedom of expression. Article 29(4) is the most relevant and places a duty on individuals to conserve and enforce national harmony. Article 29(3) imposes a duty on individuals not to compromise state security, and Article 29(7) provides that individuals should respect African cultural values in their interactions. The totality of these duties entails that individuals should contribute towards the integrity of society and respect diversity and tolerance. To the extent that the dissemination of false news does not further these goals, it could conceivably be prohibited by Article 29, but within the parameters of relevant international law.Footnote 72
12.4.2 The Normative Framework Created by the African Commission
Despite these potential limitations to freedom of expression in the Charter, the Commission has repeatedly emphasized that freedom of expression advances democratic principles. For example, in 1989, in one of its earliest Communications, the Commission stated that expression is a fundamental human right, vital for an individual’s self-development, political consciousness and participation in public affairs.Footnote 73 In its 2002 Declaration of Principles on Freedom of Expression in Africa, which developed the scope and content of Article 9, the Commission affirmed that freedom of expression is a ‘fundamental and inalienable human right and an indispensable component of a democracy’, and that any interference with freedom of expression ‘must not be arbitrary, must be provided for by law, must serve a legitimate interest and be necessary in a democratic society’.Footnote 74
The Declaration treats press and media freedom as vital and recommends self-regulation as the best system for promoting high media standards (Principle IX). The broadcast media may be more strictly regulated than print media,Footnote 75 but such regulation must comply with the requirements for legitimate restrictions of freedom of expression in international law. Thus, Principle V of the Declaration stresses that ‘[s]tates shall encourage a diverse, independent private broadcasting sector. A State monopoly over broadcasting is not compatible with the right to freedom of expression’. Where self-regulation has proved ineffective, Principle VII permits public authorities to exercise limited media regulation, provided that they do not operate in a quasi-judicial manner and that they remain independent of state control.Footnote 76
In 2004, at its thirty-sixth Ordinary Session in Senegal, the Commission established a Special Rapporteur on Freedom of Expression in Africa.Footnote 77 The Commission has consistently renewed the mandate of the Special Rapporteur and extended it to include ‘Access to Information’.Footnote 78 Since then, the Rapporteur has played a prominent role in advancing the soft law normative standards for the protection of freedom of expression and access to information in Africa. For example, in 2012 and in 2016 the Commission modified the 2002 Declaration to address the right of access to information and freedom of expression in the digital age. It also adopted a Model Law on Access to Information for Africa, plus Guidelines on Access to Information and Elections in Africa, in 2013 and 2017, respectively.Footnote 79
In 2019, again led by the Rapporteur, the Commission adopted a revised Declaration of Principles on Freedom of Expression and Access to Information in Africa.Footnote 80 The aim was to consolidate the developments on freedom of expression and access to information, guided by African and international human rights standards, including the jurisprudence of African judicial bodies.Footnote 81 The revised Declaration has five parts, which include general principles and specific principles on freedom of expression and access to information respectively. The Preamble notes that the protection of freedom of expression and the free flow of information and ideas, especially through print, broadcast media and the Internet, is directly linked to facilitating and strengthening democracy. In turn, a strong democracy fosters transparency and efficiency. States parties must also create a framework which promotes freedom of expression and the right of access to information. This includes reviewing criminal restrictions on expression so that they are justified and aligned with international human rights law standards by, inter alia, amending overly broad criminal laws on sedition, insult and the publication of false news. The Commission has also specifically called for the abolition of domestic criminal defamation laws,Footnote 82 especially those that target journalists and permit detention as a sanction.
12.4.3 African Jurisprudence Addressing Speech Restrictions
The legitimacy of domestic speech restrictions has been addressed head-on by the African Commission and the African Court on Human and Peoples’ Rights (the African Court)Footnote 83 in two important cases, namely Scanlen and Holderness v. ZimbabweFootnote 84 and Konaté v. Burkina Faso.Footnote 85 The complainants in Scanlen alleged that Zimbabwe’s Access to Information and Protection of Privacy Act, 2003Footnote 86 infringed Article 9(2) of the African Charter, because it required the accreditation of journalists and created the offence of ‘publication of falsehoods’. Referring to the 2002 Declaration, the African Commission held that whilst freedom of expression may be limited by domestic laws aimed at protecting individuals and the public from journalistic practices deviating from legitimate interests in a democracy, such laws must conform to international law standards.Footnote 87 Zimbabwe’s contention that the registration of journalists and the criminalization of falsehoods were justified on the grounds of public order, safety and the protection of the rights and reputation of others was rejected, and both measures were held to be unjustified restrictions of freedom of expression.Footnote 88
In Konaté,Footnote 89 the applicant, a newspaper editor, was charged with criminal defamation. The applicant had published articles in which he accused the Prosecutor of Burkina Faso of corruption and criminal activity. The applicant was convicted on all charges and sentenced to twelve months’ imprisonment, plus a hefty fine.Footnote 90 After analysing Burkina Faso’s criminal defamation laws, the African Court declared that the domestic law criminalizing defamation with a custodial sentence violated Article 9 of the Charter, Article 19 of the International Covenant on Civil and Political Rights (ICCPR) and Article 66(2) of the Revised Treaty of the Economic Community of West African States (ECOWAS).Footnote 91 The Court found that Burkina Faso had not demonstrated that imprisonment was a necessary limitation to freedom of expression to protect the reputation of legal officers. It also held that apart from ‘serious and very exceptional circumstances’ involving incitement to crimes or hate speech, ‘violations of laws on freedom of speech and the press cannot be sanctioned by custodial sentences’.Footnote 92 Burkina Faso’s legislationFootnote 93 thus constituted a disproportionate interference with a journalist’s right to freedom of expression.Footnote 94 This decision illustrates that criminal prosecution and imprisonment for alleged defamation of public officials is neither a ‘necessary’ nor ‘proportionate’ state interference with freedom of expression, because less intrusive measures are available for remedying injuries to individual reputation, namely civil defamation remedies.Footnote 95 The same principles apply to false news laws, as held in Scanlen.
More recently, in 2018, the Community Court of Justice of the Economic Community of West African States (the ECOWAS Court) decided Federation of African Journalists v. The Gambia.Footnote 96 The case was launched by the Federation (representing Gambian journalists broadly) and four Gambian journalists who had been forced into exile.Footnote 97 The journalists had been prosecuted for violating the Gambia’s press laws, specifically for speech criticizing the President and government officials, and had been tortured whilst in custody. The applicants asked for a declaration that the criminal offences of sedition, false news and defamation in the Gambian Criminal CodeFootnote 98 violated the right to freedom of expression in Article 9 of the African Charter, Article 19 of the ICCPR and the rights of journalists under Article 66(2) of the Revised ECOWAS Treaty.Footnote 99 The basis of the journalists’ complaint was that the Gambian laws had made it impossible for them to disseminate information in the public interest freely.Footnote 100 They claimed that the laws had a chilling effect on press freedom by creating a fear of potential arrest and prosecution for publishing information critiquing the government. Whilst acknowledging that limitations to freedom of expression are permissible, the journalists also argued that the laws were imprecise and overly broad.Footnote 101 Regarding the false news offence specifically, the journalists accepted that journalistic errors can occur, but claimed that the imposition of criminal liability for such errors infringed the right to freedom of expressionFootnote 102 and that the law did not serve a legitimate purpose.Footnote 103
The ECOWAS Court analysed comparative international and foreign law on the right to freedom of expression and freedom of the press, stressing that vague criminal offences undermine the enjoyment of the right.Footnote 104 It used this jurisprudence to hold that narrowly drafted criminal offences were needed to regulate free speech because of the ‘chilling effect’ created by wide and vague censorship restrictions. Holding that erroneous statements are inevitable in free debate, the Court relied on Konaté to find that individuals in ‘highly visible public roles must necessarily face a higher degree of criticism than private citizens; otherwise public debate may be stifled altogether’.Footnote 105 The Court concluded that the criminal laws of the Gambia, which included a false news offence, did not guarantee a free press in accordance with the African Charter and international law. The laws had a chilling effect, unduly restricted expression and the press and were disproportionate and unnecessary ‘in a democratic society where freedom of speech is a guaranteed right’.Footnote 106 The Court thus ordered that the impugned laws be reviewed and decriminalized to conform with freedom of expression.Footnote 107
The decisions in Federation of African Journalists and Konaté were confirmed in 2020 by the ECOWAS Court in Incorporated Trustees of Laws and Rights Awareness Initiatives v. The Federal Republic of Nigeria.Footnote 108 Here, the Court held that a criminal sanction in the Nigerian Cybercrime Act 2015, penalizing expression offensive to ‘honour’, reputation and morals, violated the African Charter as a disproportionate restriction on freedom of expression.Footnote 109
12.4.4 The Relationship between Freedom of Expression and Democratic Governance
The most recent development emanating from the African Commission on the link between media freedom and democratic principles is a September 2023 statement by the Special Rapporteur that ‘[t]he right to access information is … a key component of democracy, … when people are able to access information about how their Government is performing, they can exercise their right to freedom of expression more meaningfully. Individuals need to have access to reliable sources … to form an accurate opinion’. The Rapporteur added that the information right is both a human right and an indispensable tool empowering citizens to participate publicly and demand state accountability. The role of the media is essential. Accordingly, the Rapporteur recommends that states adopt laws guaranteeing the right of every individual ‘to receive information’ as per the African Charter, because despite efforts to protect the expression and information rights, African domestic law does not facilitate such rights.Footnote 110
Non-compliance at domestic level occurs even though Article 13 of the African Charter provides that every citizen shall have the right to participate freely in the government of their country, either directly or through freely chosen representatives in accordance with the law. Moreover, the African Charter on Democracy, Elections and Governance declares that regular, free and fair elections are the basis of a legitimate government.Footnote 111 The Democracy Charter specifically emphasises the link between the promotion of democracy, the rule of law and human rights, including free expression.Footnote 112 The rationale of the Guidelines on Access to Information and Elections in Africa, published by the African Commission in 2017, is the need to ensure freedom of expression and access to information during elections.Footnote 113 Including within their ambit a wide range of actors, such as political parties, election observers, the media and internet intermediaries, the Guidelines record that the disclosure of information enabling the public to participate actively in public affairs is needed, as are transparency and accountability.Footnote 114 The Preface notes the importance of ‘access to accurate, credible and reliable information’. This is reinforced by Section 25, which provides that regulatory bodies must enact regulations to promote ‘fair and balanced coverage of the electoral process’, whether offline or in the digital space. Internet shutdowns are also addressed, with the Guidelines calling on states not to block the Internet or restrict media freedom during elections.Footnote 115 Should restrictions be needed, their legitimacy will be tested against the international standards of legality, legitimacy, necessity and proportionality for the limitation of rights.Footnote 116
12.5 Instances of False News Dissemination and Its Regulation in Domestic African States
Despite the strong normative framework for the protection of freedom of expression and the press at regional level, most African states continue to regulate false news, mainly through criminal sanctions. Internet shutdowns are also frequently implemented. Another reality is the repeated political manipulation of information, often with the assistance of powerful actors, aimed at retaining power and control of public finances.Footnote 117 These campaigns make it increasingly difficult for the public to discern the truth, which undermines the ability to make decisions (whether personally or in relation to public matters) and to participate in democratic processes in an informed manner.Footnote 118
Research conducted by the African Centre for Strategic Studies from 2020 to 2022 has revealed a vast array of disinformation schemes in African states, especially on digital and social media platforms.Footnote 119 Those responsible for the dissemination of this ‘information’ include political parties, individual politicians and state and non-state actors from beyond Africa (who create, inter alia, fake social media accounts, hashtags and messages designed to boost the support of leaders sympathetic and amenable to the actor or state’s particular cause).Footnote 120 This research and many other studies show that the target countries include Nigeria,Footnote 121 Kenya,Footnote 122 Ghana, Mali, Cameroon,Footnote 123 Tanzania, Ethiopia,Footnote 124 GuineaFootnote 125 and Sudan,Footnote 126 amongst others.Footnote 127
It is therefore not surprising that many African states have either enacted false news laws or made use of the existing common law colonial-era crimes of defamation and libel to punish the dissemination of information considered misleading or false.Footnote 128 Unfortunately, African false news laws are also usually framed in broad terms and criminalize, inter alia, the spreading of false rumours, insults and complaints against government or public authorities; the fostering of dissent and unrest between sections of the community through the publication of false news; and the uttering of hate speech designed to incite hatred, violence or any type of disturbance on grounds such as race, religion and ethnicity.Footnote 129 Most of these laws are justified as measures to protect national security and social harmony.
The irony of such regulation, however, is that the targets are usually journalists and those critical of authoritarian governments, the real aim being to silence opposition and to enable existing regimes to maintain political control. The consequence is a severe impact on democratic principles in African states,Footnote 130 which Charles Fombad has labelled the ‘crisis of democracy in Africa’.Footnote 131 Fombad claims that
Many recent elections … have degenerated into little more than competitive authoritarianism. This is because democratic reforms and periodic elections of the past two decades have come to be increasingly used as a ‘survival strategy’ by Africa’s autocratic rulers. Elections … come in handy to keep opposition parties in the political game, lest the regimes lose their democratic façade while incumbents perpetuate their rule.Footnote 132
There are some African states, however, which have adopted freedom of information laws that enhance expression, press freedom and democratic principles. These include Namibia, Botswana and Zambia.Footnote 133 But, as demonstrated in Section 12.6, South Africa is the outlier, with the Constitutional Court using the Bill of Rights in South Africa’s Constitution, 1996, to protect and promote freedom of expression, political rights and democratic governance, even in the face of attempts to silence the press and state condemnation of the courts for interfering in executive matters.
12.6 The South African Law
This section will address the way in which freedom of expression and the right to free and fair elections are protected in South African law. It starts with the constitutional framework, which sets the normative benchmarks for the relevant rights and their legitimate restriction, and then explains how South Africa regulates false news. It will be shown that despite many attempts by the state to stifle media freedom, the South African judiciary has consistently endorsed and promoted a free flow of information and debate (both generally and during elections). This approach is informed by the need to protect the constitutional democracy and the open and accountable governance that underpin the Constitution, and it stands in stark contrast to the apartheid approach, where state censorship was rife.
12.6.1 The Constitutional Protection of Freedom of Expression
Section 16 of the South African Constitution entrenches the right to freedom of expression. It provides:
16. Freedom of Expression
(1) Everyone has the right to freedom of expression, which includes –
a) freedom of the press and other media;
b) freedom to receive or impart information or ideas;
c) freedom of artistic creativity;
d) academic freedom and freedom of scientific research.
(2) The right in sub-section (1) does not extend to –
a) propaganda for war;
b) incitement of imminent violence;
c) advocacy of hatred that is based on race, ethnicity, gender or religion, and that constitutes incitement to cause harm.
The South African courts have confirmed the value of freedom of expression in a democratic society on many occasions. For example, in South African National Defence Force Union v. Minister of DefenceFootnote 134 O’Regan J held that expression plays a significant role as ‘a guarantor of democracy’ and facilitates the ‘moral agency’ of society, permitting individuals to form and express opinions and ideas.Footnote 135 Nevertheless, freedom of expression is not an absolute guarantee; nor is it a paramount value.Footnote 136 It is one of a ‘web of mutually supporting rights’,Footnote 137 and must be interpreted in accordance with constitutional valuesFootnote 138 and other constitutionally protected rights, including the rights to human dignityFootnote 139 and to participate in free and fair elections.Footnote 140 It should also be exercised with ‘due deference’ to ‘the pursuit of national unity and reconciliation’.Footnote 141
The ambit of Section 16(1) is broad. The word ‘everyone’ includes natural and juristic persons,Footnote 142 citizens and non-citizens.Footnote 143 ‘Expression’ is protected, which is a wider concept than ‘speech’.Footnote 144 The Constitutional Court’s approach to freedom of expression cases is to define expression widely at the threshold stage, deferring the adjudication of the value of the expressive act in question to the limitation analysis in terms of Section 36 of the Constitution.Footnote 145 Therefore, in De Reuck v. Director of Public Prosecutions (WLD),Footnote 146 finding that child pornography was included within the ambit of expression, the Court held that the right ‘does not warrant a narrow reading’ and that any limitation ‘must satisfy the rigours of the limitation analysis’.Footnote 147 Confirming Handyside v. United Kingdom,Footnote 148 the Court found that the right to express oneself and the corresponding right to receive information and ideas extends ‘not only to “information” or “ideas” that are favourably received or regarded as inoffensive or as a matter of indifference but also to those that offend, shock or disturb’.Footnote 149 The wide interpretation of ‘expression’ means that the right has many components.Footnote 150 So, expressive acts such as flag burning, nude dancing,Footnote 151 the publication of photographs, the display of posters and works of art, dressFootnote 152 and symbolic gestures (such as salutes)Footnote 153 are included within expression and are prima facie worthy of constitutional protection.
It is only during the later proportionality enquiry that the value of the expression in issue is considered to determine the justifiable limitation of the right to free expression. Here, the Court must assess whether the expressive act promotes the rationales underpinning the right, and must distinguish between expression that lies at the ‘periphery’ of the right and expression which is unworthy of protection. In De Reuck, for example, the Court had no difficulty in holding that child pornography was ‘expression of little value’.Footnote 154
Section 16(1) specifically lists four types of expression. Whilst these are positioned at the core of the right (as opposed to its periphery), they should not be interpreted as being more valuable than other forms of unspecified expression. The listed categories simply expand upon the meaning of expression. They do not fix the scope of constitutionally protected expression. Other types of expression can also bear equal weight, even though not listed textually. The role played by political expression is clearly important, given the emphasis on how free expression protects the democracy.
Freedom of the press and other media is specifically mentioned in Section 16(1)(a) because of the significant role that the media play in ensuring the promotion of a democracy. The South African courts have unfailingly confirmed the media’s role in a democratic society.Footnote 155 As indicated earlier, in Print Media the Court linked press freedom to a functioning democracy. The Court also confirmed that laws limiting press freedom must be closely monitored so as not to undermine the public’s right to a strong media. Similarly, in Mail and Guardian Media Ltd v. MJ Chipu NO (Chairperson of the Refugee Appeal Board), the Court held that the media is a ‘key facilitator’ of freedom of expression.Footnote 156 There is thus no doubt that media and press freedom is positioned at the core of the right and that laws restricting it will face a stiff challenge in the limitation analysis.
The role of the press and the media, however, is two-dimensional – their right is protected by Section 16(1)(a), but they must also fulfil their duty to society. The Court addressed this in Khumalo v. Holomisa,Footnote 157 holding that the media should not only rely on freedom of expression, but must also ‘foster’ it, and that people’s ability to function effectively in society depends on how the media fulfils its obligations. The media are thus both right bearers and ‘bearers of constitutional obligations’.Footnote 158
Section 16(1)(b) protects the right to receive and impart information and ideas – the dual aspect of expression.Footnote 159 This is an integral component of freedom of expression as the reception of information and ideas enables individuals to participate fully in public society, buttressing the constitutional values which envisage a responsive, accountable and open democratic state.Footnote 160 Accordingly, in Islamic Unity Convention v. Independent Broadcasting Authority,Footnote 161 the Court found that the Broadcasting Code in issue infringed not only the right of broadcasters to disseminate information, but also deprived the public of the right to receive diverse views.Footnote 162
The distinction between ideas and information is interesting, especially in the context of false news laws. The types of expressive acts classified as ‘information’ must clearly be distinguished from ‘ideas’, which term is usually widely defined to include opinions, thoughts, plans, creative works and so on. In The Citizen 1978 (Pty) Ltd v. McBride,Footnote 163 the Court held that ‘information’ includes ‘only factual statements’, as opposed to opinions and comments,Footnote 164 but this conclusion is debatable, as discussed below. The textual inclusion of ‘artistic creativity’ as a form of protected expression in Section 16(1)(c) is a consequence of strict censorship during apartheid.Footnote 165 The same is true for academic freedom. All forms of art are protected, including music, books, paintings and theatre productions.Footnote 166
An interesting question that the Court may have to determine when dealing with potential ‘fake news’ cases is whether manipulated photos and their ilk could be classed as artistic expression and thus be positioned at the core of the right. This is because the courts acknowledge that artists play a significant role in society by contributing to the existing dialogue and social debate, usually because their views can be controversial and are critical for the development of a vibrant culture in a democratic and functioning society.Footnote 167 Also, artists are often at risk of censorship, probably because their work is displayed in the public domain, engaging society and eliciting diverse reactions.Footnote 168
Section 16(2) lists the types of expression which are not constitutionally protected, namely propaganda for war, incitement of imminent violence and hate speech (which is strictly defined). The boundaries of the categories of expression excluded from constitutional protection are important, because Section 16(2) is definitional, and the Court has held that legislative measures which restrict expression beyond the scope of the constitutional exclusions must be justified in terms of the general limitation clause. However, restrictions falling within the strict parameters of Section 16(2) do not limit the right to freedom of expression at all.Footnote 169 Plus, all limitations must be restrictively interpreted.
None of the listed exclusions to protected expression in Section 16(2) include false news regulation. This means that any law enacted to restrict the dissemination of information, whether false or not, will be treated as limiting freedom of expression and require justification in terms of the limitation clause, which is introduced in Section 12.6.3, following an examination of the protection of political rights in the Constitution.
12.6.2 Political Rights
Section 19 of the Constitution, headed political rights, reads as follows:
(1) Every citizen is free to make political choices, which includes the right:
a) to form a political party;
b) to participate in the activities of, or recruit members for, a political party; and
c) to campaign for a political party or cause.
(2) Every citizen has the right to free, fair and regular elections for any legislative body established in terms of the Constitution.
(3) Every adult citizen has the right:
a) to vote in elections for any legislative body established in terms of the Constitution, and to do so in secret; and
b) to stand for public office, and if elected, to hold office.
This section is complemented by Section 1(d) of the Constitution – which guarantees a multi-party system of democratic government, to ensure accountability, responsiveness and openness.Footnote 170 These values were stressed in My Vote Counts NPC v. Minister of Justice and Correctional Services (My Vote Counts II),Footnote 171 and arise by virtue of South Africa’s history under apartheid, where the majority of South Africans were denied the right to vote.Footnote 172
The political right is cast in generous and unqualified terms. Thus, the Court in Ramakatsa v. MagashuleFootnote 173 held that ‘the section means what it says … It guarantees freedom to make political choices and … safeguards a member’s participation in the activities of the [political] party concerned … It protects the exercise of the right not only against external interference but also against interference arising from within the party’. The right to vote thus upholds the democracy and is linked to human dignity – it is a ‘badge of dignity and of personhood’.Footnote 174
Section 19 must be applied and interpreted in its entirety. The Constitutional Court in NNP v. Government of South AfricaFootnote 175 held that the right to vote would ring hollow without the right to free, fair and regular elections. As to the meaning of free and fair elections, in Kham v. Electoral Commission,Footnote 176 the Court held that the ‘free and fair’ requirement is singular and not a conjunction of two disparate elements. The term includes ‘both the freedom to participate in the electoral processes and the ability of the political parties and candidates, both aligned and non-aligned, to compete with one another on relatively equal terms’.Footnote 177
In the main, elections in South Africa are contested by political parties – they occupy centre stage and play a vital role in facilitating citizens’ political rights.Footnote 178 But although political parties have been described as ‘the engine of democracy in South Africa’, there is little regulation of their internal functioning. Nonetheless, as outlined in Ramakatsa, political parties must comply with the Constitution, their own rules, the Electoral Code of ConductFootnote 179 and the Electoral Commission Act.Footnote 180 Most importantly, the rules of political parties must be consistent with the Constitution. As discussed in Section 12.6.6, the Electoral Act regulates the dissemination of false news during elections, the legitimacy of which was raised in the important case of Democratic Alliance v. African National Congress.Footnote 181
In My Vote Counts NPC v. Minister of Justice and Correctional Services,Footnote 182 the Court had to consider whether voters have a right to know who funds political parties and whether the right to vote includes an ‘informed vote’. A related but key question was whether political parties and the state have a duty to record, preserve and disclose the sources of their private funding. This issue required an analysis of whether South Africa’s Promotion of Access to Information Act (PAIA)Footnote 183 was unconstitutional because it failed to oblige political parties to record and disclose their private funding sources. The Court held that people are entitled to information held by political parties because such information is critical to the fulfilment of the political right, especially the right to vote. The Court gave three reasons. Firstly, citizens are entitled to make informed choices when voting so that their vote is an expression of their genuine will.Footnote 184 Secondly, the duty of disclosure helps to combat corruption and ensures that elected representatives serve the public interest, rather than the agendas of private entities or foreign governments.Footnote 185 Thirdly, this interpretation aligns with that in international law.Footnote 186 The Court added that it is not only voters who are entitled to disclosure but also the media and other agents that are obliged to educate the voting public.
The PAIA was thus declared unconstitutional to the extent that it excluded political parties from its ambit and did not require parties to preserve and record information about private funding and make it readily accessible to the public. Parliament was ordered to amend the PAIA or to enact new legislation to promote the effective exercise of the right to make political choices and to participate in elections. The PAIA has since been amended,Footnote 187 and the Political Party Funding ActFootnote 188 has been enacted to give effect to the Court’s order. It is clear that the South African Constitutional Court has aligned the rights to freedom of expression, access to information and free and fair elections to the founding constitutional values of democracy, freedom, responsiveness, transparency and accountability.
12.6.3 The General Limitation Clause
Section 36(1) provides as follows:
36 Limitation of Rights
(1) The rights in the Bill of Rights may be limited only in terms of law of general application to the extent that the limitation is reasonable and justifiable in an open and democratic society based on human dignity, equality and freedom, taking into account all relevant factors, including –
(a) the nature of the right;
(b) the importance of the purpose of the limitation;
(c) the nature and extent of the limitation;
(d) the relationship between the limitation and its purpose; and
(e) less restrictive means to achieve the purpose.
Section 36(1) permits the justifiable infringement of a protected right if the limitation is both rational and proportional. The limitation must serve a ‘compellingly important’ purpose. A right can only be limited if the limitation will achieve its purpose and there is no other realistic way to achieve that purpose.Footnote 189
A two-stage analysis is adopted when rights are limited. In the first stage, the ambit of the right in issue is determined by way of an interpretative process – the ‘threshold stage’.Footnote 190 The right is usually interpreted generously, it being considered unnecessary ‘to shape the contours of the right in order to accommodate pressing social interests’.Footnote 191 Should a law of general application violate the protected scope of the right, a second-stage justification evaluation must be conducted. Here, a broad assessment utilizing the Section 36(1) factors is undertaken to determine whether the infringement of the right is justifiable in an open and democratic society, based on human dignity, equality and freedom. The party arguing for the limitation (usually the state) bears the onus to discharge the ‘burden of justification’ by demonstrating that the rights infringement is justifiable.Footnote 192
To date, the South African courts have been very reluctant to permit the introduction of laws that limit press and media freedom. This is not only because of the impact of past censorship during the apartheid eraFootnote 193 but also because the courts have recognized the crucial role that the press play when it comes to protecting accountable and transparent governance, a key component of an open and democratic society based on human dignity, equality and freedom.Footnote 194 Laws that limit both freedom of expression and the right to vote will thus be strictly scrutinized by the courts during the Section 36 proportionality analysis.Footnote 195 The state will have to show that there is a legitimate need for any such law, with the law’s purpose being rationally connected to the outcome it aims to achieve.
South Africa has no specifically legislated false news restriction. However, the dissemination of disinformation could be regulated via the common law of defamation and legislation regulating cybercrimes, films and publications, and elections. Some of these laws, and how they have been interpreted and applied by the courts, are now briefly introduced.
12.6.4 Defamation: The Common Law Civil Remedy and a Criminal Offence
Most South African defamation cases are civil in nature,Footnote 196 aimed at protecting the good name or reputation of both natural and juristic persons.Footnote 197 Very few cases of criminal defamation are reported and there have been many calls to repeal the common law crime of defamation.Footnote 198 Civil defamation is addressed first, followed by criminal defamation. It will be shown that neither remedy has been used to address false news types of cases and, moreover, that the courts have developed the law of defamation to balance freedom of expression (of the media particularly) and the right to human dignity.
12.6.4.1 Defamation
A person whose reputation has been damaged by the publication of an intentional and unlawful (or wrongful) defamatory statement may claim damages from the wrongdoer, alternatively a so-called take-down order, often coupled with an order that the wrongdoer retract the statement and/or apologize.Footnote 199 Once the plaintiff proves that there has been publication of a defamatory statement,Footnote 200 the elements of wrongfulness and intention are presumed to have been met. The defendant must then prove a ground of justification (a defence) to rebut the presumptions.Footnote 201
In the constitutional era, the courts have taken active steps to develop the law of defamation, by balancing the rights to freedom of expression and human dignity.Footnote 202 Media freedom has benefited especially,Footnote 203 with the courts creating special grounds of justification for the media. So, in Holomisa v. Argus Newspapers,Footnote 204 the Court held that ‘a defamatory statement which relates to free and fair political activity is constitutionally protected, even if false, unless the plaintiff shows that, in all the circumstances of its publication, it was unreasonably made’. This created an exception to the rule that the defence of truth and public interest could be used only where a statement is factually true. Known as the reasonableness defence, the exception was introduced to protect press freedom by creating leeway for false statements relating to free and fair political activity.
The defence was confirmed in National Media Ltd v. Bogoshi.Footnote 205 The Supreme Court of AppealFootnote 206 held that a publisher could avoid liability for defamation where, even if it could not prove that the statement was true, it could establish that publication was reasonable. The Court held that: ‘[T]he publication in the press of false defamatory allegations of fact will not be regarded as unlawful if, upon a consideration of all the circumstances of the case, it is found to have been reasonable to publish the particular facts in the particular way and at the particular time.’Footnote 207 Relevant factors include: whether the statement related to political discussion; the tone in which the report was written; the nature of the information on which the allegations were based; the reliability of their source; and steps taken to verify the information. The defence of reasonable publication was confirmed in Khumalo, where the Constitutional Court stressed that the mass media play a significant role in the protection of freedom of expression to enable individuals to receive and impart information and ideas.Footnote 208
The Khumalo court did, however, warn that ‘while a person cannot claim a strong constitutional interest in protecting their reputation against the publication of truthful but damaging statements, neither do publishers have a strong constitutional speech interest in the publication of false material’.Footnote 209 As shown below, this caution did not deter the Court from refusing to enforce a ban on the publication of false information during elections.
Most recently, in 2022, in Reddell v. Mineral Sands Resources (Pty) LtdFootnote 210 the Constitutional Court took further steps to protect freedom of expression, but in relation to whether trading companies could sue environmental activists for reputational loss.Footnote 211 The statements in issue (distributed on multiple platforms, including YouTube, as an e-book and on online news sites) accused the plaintiff mining companies of harming the environment. The environmentalists (as defendants) challenged the constitutionality of the common law defamation rule permitting trading companies to sue for non-patrimonial damages for reputational loss. They claimed that the existing law undermined their right to freedom of expression and that the companies, as juristic persons, could not rely on the right to inherent human dignity to justify a claim for reputational damage.Footnote 212
The majority of the Court agreed, holding that a claim for general damages ‘to a trading corporation for harm to its reputation infringes the … right to freedom of speech, specifically in relation to speech which is of public importance or which requires public debate and participation’.Footnote 213 The Court confirmed that the speech in issue was in the public interest (environmental harm) and was of considerable value in an open and democratic society. The Court stressed that the activists created a platform for public participation about environmental compliance by large mining companies and that such speech ‘warrant[s] a high standard of protection’.Footnote 214 It added that ‘discourse about matters that affect all or many of us are of grave public concern … must be encouraged and not stifled in a vibrant democracy like ours’.Footnote 215 This did not mean that the companies had no alternative relief: if necessary, they could apply for an interdict (in the form of a takedown order),Footnote 216 a declarator, a retraction or an apology.
The minority in Reddell, however, was not prepared to develop the common law of defamation to this extent. Whilst the minority’s focus was on the interpretation of the right to human dignity, it echoed Khumalo’s warning about the limited value of false information, especially in the digital age, given the reach of social media platforms.Footnote 217 It added that ‘[p]ublic discourse is speech that takes place in public. Social media is the town square writ large. It is pre-eminently the platform of public discourse. Issues of legitimate debate is a concept of bountiful elasticity. But a subject may be one of legitimate debate and yet what is said may be false, even hateful, and reputationally ruinous.’Footnote 218 Thus, the extent to which freedom of expression can be relied upon in defamation cases depends on ‘what speech is used, how it is used, and with what consequences’. Where a defamatory statement is a ‘blatant falsehood that does great reputational harm’, expression should not prevail.Footnote 219 It is noteworthy, of course, that the majority disagreed and protected critical expression where public participation is crucial.
It is thus clear that whilst the South African courts have not had to deal directly with a ‘false news’ type of case under the realm of defamation, the courts are aware that such a case would require careful consideration given the value placed on freedom of expression. Whatever the future outcome, it is certainly highly unlikely that the courts would permit a criminal sanction for defamatory speech, as is now demonstrated.
12.6.4.2 Criminal Defamation
The crime of defamation is defined as the unlawful and intentional publication of a matter concerning another person which tends to injure their reputation. In 2008 in Hoho v. SFootnote 220 the Supreme Court of Appeal had to consider whether the crime of defamation still exists in South African law and whether it is constitutionally legitimate. The accused, Luzuko Hoho, had been convicted of twenty-two counts of criminal defamation and was sentenced to three years’ imprisonment suspended for five years and, in addition, to three years’ correctional supervision. Hoho was a researcher employed by a Provincial Legislature and had published various leaflets in which he defamed the Premier, the Speaker and various other politicians. He accused them of corruption, bribery, financial embezzlement, sexual impropriety, illegal abortion and fraud.
Hoho raised various defences, including that the crime of defamation no longer existed in South African law and that, even if it did, it was unconstitutional. The Supreme Court of Appeal disagreed. It held that the crime had not been abrogated by disuse and that it did not unjustifiably infringe the right to freedom of expression.Footnote 221 The Court reasoned that whilst a criminal sanction is a drastic measure, the limitation to freedom of expression was balanced by the onerous burden of proof borne by the state in criminal cases, the parallel between the criminal law’s protection of physical integrity (through the crime of assault) and its protection of reputation, and the fact that there was still a need for a criminal sanction to protect a person’s reputation.Footnote 222
Despite this, it is anticipated that legislation may be passed soon to decriminalize defamation in South Africa. Indeed, as far back as September 2015, the state announced that it would introduce legislation to decriminalize defamation on the grounds that it unjustifiably infringes the right to freedom of expression.Footnote 223 Whilst the legislation is yet to be tabled, it is highly likely that the current Constitutional Court would have no difficulty in declaring criminal defamation unconstitutional, especially given the calls by the African Commission to decriminalize defamation and the added protection the Court has given to freedom of expression in civil defamation cases.
12.6.5 The Cybercrimes Act 2020
The Cybercrimes Act, which criminalizes unlawful activities in cyberspace, commenced on 1 December 2021.Footnote 224 The Act was introduced because it was recognized that the existing common law crimes were incapable of regulating criminal conduct committed online.Footnote 225 Advances in digital technology amplified the need for the Act, aggravated by the ease with which cybercrimes such as fraud, extortion, forgery, child pornographyFootnote 226 and hacking could be committed. As these crimes became more prevalent, it was obvious that legislated criminal offences, with a specifically adapted procedural framework, were needed to regulate unlawful conduct committed online.Footnote 227
The long title to the Act states that it was enacted to create and penalize cybercrimes.Footnote 228 Chapter 2 of the Act contains five substantive criminal law segments. Part I regulates cybercrimes recodified from existing common law crimes,Footnote 229 and adds new offences. These include disclosure of an electronic data message that causes damage to property or violence against a person or group of persons,Footnote 230 the unlawful and intentional disclosure of a data message of an intimate image of a personFootnote 231 and the so-called malicious communication crimes.Footnote 232 Part III creates offences in the context of various cybercrime activities, such as attempting, aiding, inducing, inciting or instigating a person to commit a specified offence. Part IV deals with competent verdicts and Part V permits the grant of court orders to protect complainants from the harmful effects of malicious communications. Provisions are also created to regulate the obligations of electronic communications service providers and financial institutions to report cybercrime offences and to preserve information relevant to an investigation.
From a disinformation perspective, it is interesting that the Act does not contain a specific provision criminalizing the dissemination of false data (or news) intended to cause harm. Instead, such conduct would have to be addressed either in terms of the cyber fraudFootnote 233 or cyber forgery and uttering offencesFootnote 234 (which criminalize, inter alia, the unlawful use or passing off of false data, or a misrepresentation made with the intention of defrauding another person and causing harm) or in terms of the malicious communications provisions. The latter offences are intended to capture within their ambit the electronic communication of data messages which are published with the intention of inciting damage to persons or their property based on identifiable group characteristics.Footnote 235 It is thus clear that the Cybercrimes Act was not enacted to regulate false news or disinformation, a conclusion which is supported by the fact that such a purpose is not included in the Act’s objectives.Footnote 236
12.6.6 The Electoral Act 73 of 1998
The Electoral Act is the domestic legislation that regulates and gives normative content to the right to free and fair elections and the right to vote. The Act contains various provisions which prohibit certain types of conduct by political parties and other actors during elections. The aim is to ensure the achievement of free and fair elections. There are seven main prohibitions. One of these is Section 89(2)(c) of the Act, which prohibits the publication of false statements with the intention of influencing the conduct or outcome of an election. Any person who contravenes the section is guilty of a criminal offence and may be fined or imprisoned for up to ten years.
Schedule 2 to the Electoral Act contains the Electoral Code of Conduct.Footnote 237 Like the Act, it prohibits certain types of conduct to promote ‘(a) tolerance of democratic political activity, and (b) free political campaigning and open public debate’. Item 9 of the Electoral Code of Conduct provides that no registered party or candidate may publish false or defamatory allegations in connection with an election. This part of the Code must be read with Item 4 thereof, which records that freedom of political expression is a core component of a free and fair election.
These provisions were interpreted by the Constitutional Court in Democratic Alliance v. African National Congress.Footnote 238 The case concerned an SMS sent by the Democratic Alliance (DA)Footnote 239 to 1,593,682 persons in the Gauteng province, approximately six weeks before the date set for the 2014 national elections. The SMS read: ‘The Nkandla report shows how Zuma stole your money to build his R246mFootnote 240 home. Vote DA on 7 May to beat corruption. Together for change.’ The SMS was based on the Nkandla Report, penned by the Public Protector,Footnote 241 released a day before the SMS was sent and reporting that President Zuma had improperly used public finances for security upgrades to his private residence (Nkandla). The African National Congress (the ANC – and Zuma’s party) launched an application asking for a declaration that the SMS violated the Electoral Act and the Code. The ANC requested an order restraining the DA from re-disseminating the message and a retraction. The ANC argued that the SMS alleged that the Nkandla Report stated that President Zuma had committed theft, but that this was not the case, and that the SMS therefore contained false information published with the intention of influencing an election in breach of the Electoral Act.
In response, whilst accepting the constitutional validity of the Electoral Act provisions, the DA denied that the SMS was false. It argued that the SMS meant that the Nkandla Report merely demonstrated how Zuma had misused public funds and that ‘read in light of the Nkandla Report, the SMS express[ed] an opinion that a fair person might honestly and genuinely hold in light of the facts in the Report, and the Report must be understood and read in its totality’.Footnote 242 A key issue therefore was whether the SMS amounted to an expression of comment or opinion as opposed to a statement of fact.
To address this question, the Court had to interpret Section 89(2)(c) of the Act, as read with the Code. The Court opted for a restrictive interpretation because of the principle that legislation limiting a right (here freedom of expression) should not be interpreted broadly, especially when cast as a criminal sanction.Footnote 243 Recognizing that freedom of expression serves many purposes, including individual autonomy and the promotion of a vibrant democracy, the Court stressed the need for active participation by informed voters during elections. Linking the right to the apartheid struggle and the censorship that existed then, the Court held that ‘[i]n celebrating the democracy we have created, we rejoice as much in the right to vote as in the freedom to speak that makes that right meaningful. An election without as much freedom to speak as is constitutionally permissible would be stunted and inefficient’.Footnote 244
Another important factor was that the right to freedom of expression underpins many of the other constitutionally protected rights, which together:
protect the rights of … like-minded people to foster and propagate their views. They confirm the importance, both for a democracy and the individuals who comprise it, of being able to form and express opinions – particularly controversial or unpopular views, or those that inconvenience the powerful. The corollary is tolerance. We have to put up with views we don’t like … It means the public airing of disagreements. And it means refusing to silence unpopular views.Footnote 245
In the electoral context, public debate is especially valuable because it contributes to ‘opinion-forming and holds public office-bearers and candidates for public office accountable’.Footnote 246 Importantly, for open and transparent governance, the Court added that:
Political life in democratic South Africa has seldom been polite, orderly and restrained. It has always been loud, rowdy and fractious. That is no bad thing. Within the boundaries the Constitution sets, it is good for democracy, good for social life and good for individuals to permit as much open and vigorous discussion of public affairs as possible.Footnote 247
Having reached this conclusion, the Court turned to the next question: what kinds of ‘information’ and ‘allegations’ were included in the prohibition in Section 89(2)? In other words, were both factually incorrect statements and expressions of opinion prohibited, or only the former?Footnote 248 The answer, according to the majority, was that only false statements or information were prohibited,Footnote 249 and that the SMS was clearly an opinion, or alternatively a comment, based on the Report, to which it directly referred for its authority. Thus, the DA had not violated the Act.Footnote 250 This decision, whilst controversial at the time, demonstrates that the South African courts take their constitutional mandate in Section 165 of the Constitution seriously – that is, the duty to uphold the Constitution and to apply it without fear or favour. The consequence is that laws regulating false news or information, especially when cast as penal measures, are unlikely to be condoned by the courts.
12.6.7 COVID Regulations under the Disaster Management Act 2002
The first false news laws in South Africa were prompted by the COVID-19 pandemic and were introduced under the Disaster Management Act 2002 (DMA), which gives the executive extensive powers, including the power to implement legislation forthwith and without consultation. South Africa declared a national state of disasterFootnote 251 in terms of the DMA in March 2020 in response to the pandemic.Footnote 252 A mandatory twenty-one-day lockdown commenced on 25 March 2020.Footnote 253 This resulted in the closure of schools, universities, churches and businesses, with freedom of movement being severely restricted.Footnote 254 The lockdown was extended repeatedly through regulations authorized by the DMA and operated at different levels, depending on the rise in the number of COVID-19 cases,Footnote 255 but officially ended on 4 April 2022.Footnote 256
Disasters in terms of the DMA are classified according to whether they are local, provincial or national. A disaster is treated as a national disaster if it affects more than one province (Section 26(1)), which was clearly the case during the pandemic. The consequence was that the national executive became primarily responsible for the coordination and management of the crisis. A minister designated by the President was given the power to make regulations or issue directions concerning a wide range of matters,Footnote 257 including, inter alia, ‘other steps that may be necessary to prevent an escalation of the disaster, or to alleviate, contain and minimize the effects of the disaster’. Whilst there were numerous challenges to the constitutionality of the Act and the regulations issued thereunder, most of these challenges were unsuccessful, with the courts deferring to the state’s prerogative to manage the pandemic and justifying the restriction of rights on the basis that the lockdown regulations were a legitimate and rational response to both a national and an international crisis.Footnote 258
From a freedom of expression perspective, of particular concern was a regulation which made it an offence to ‘publish a statement through any medium with the intention to deceive about a narrow range of information related to the transmission of the virus, personal infection status and government measures to address the pandemic’. If convicted, an accused could be punished by a fine, imprisonment for six months, or both.Footnote 259 This prohibition was introduced as soon as the lockdown was announced and was intended to protect public health and prevent the spread of rumours about the virus, the impact of vaccines and so on. The regulation was in fact enforced, with arrests reported (one accused was alleged to have disseminated false news about test kits) and the government operating a reporting system, which it named Real411.Footnote 260 People were encouraged to report ‘disinformation’ via a mobile app, website or a WhatsApp number, and alleged false news incidents were then published on the government website while awaiting verification by a Digital Complaints Committee, run by a non-governmental organization called Media Monitoring Africa.Footnote 261 Whilst monitoring independent of government was welcomed, the regulation attracted extensive critique, particularly because the government actively encouraged whistleblowing.Footnote 262
When the lockdown ended in April 2022 the regulations issued under the DMA were set aside and no longer applied. It is a serious worry, however, that the government has since attempted to use the DMA as a tool to manage other national emergencies, including Eskom loadshedding. The problem with this approach is that it permits governmental overreach, does not provide for parliamentary oversight and results in the introduction of legislation without following the ordinary constitutional rules for law-making.Footnote 263
12.7 Conclusion
The aim of this chapter was to present an analysis of the regulation of disinformation in Africa, focusing on African regional law and domestic false news laws in various African states, but with an emphasis on the South African law. The chapter revealed the tension between the need to protect freedom of expression and the right to free and fair elections in the context of a continent which is regularly subjected to disinformation campaigns aimed at undermining public participation in democratic governance and extending political control. Despite commendable efforts at the regional and sub-regional levels to promote the importance of a free press and media for the advancement of accountable and transparent governance in the digital age, the reality is that the dissemination of fake news in African states remains prevalent and poses a severe risk to democracy, especially as digital technology becomes more sophisticated. The harm caused by disinformation cannot be ignored given the fragile state of democracy in most African states. The commitment of the South African courts to the balancing of freedom of expression with the right to an informed vote, as a component of democratic governance, provides some hope, but a more sustained and globally integrated effort encompassing regulatory reform and promotional measures, including international partnerships, is needed if Africa is to withstand the threat of disinformation.
13.1 Introduction
Despite its transformative and progressive 2010 Constitution, Kenya is still grappling with a hybrid democracy, displaying both authoritarian and democratic traits. Scholars attribute this status to several factors, a prominent one being the domination of the political order and the wielding of political power by a few individuals and families with historical ties to patronage networks and informal power structures.Footnote 1 The persisting issues of electoral fraud, widespread corruption, media harassment, weak rule of law and governance challenges further contribute to the hybrid democracy status.Footnote 2 While the 2010 Constitution aims to restructure the state and enhance democratic institutions, the transition process is considered incomplete, especially since the judiciary, through judicial review, is often left with the difficult task of countering democratic regression.Footnote 3 Moreover, critical institutions such as the Independent Electoral and Boundaries Commission (IEBC) have faced criticism due to corruption scandals and perceptions of partisanship, eroding public trust in their ability to oversee fair elections effectively.Footnote 4
In the context of Kenya’s hybrid democracy, the new challenge posed by disinformation and misinformation has become a pressing concern, particularly in relation to elections. The selective use of information to manipulate political outcomes has harmed the democratic process, contributing to voter apathy, ethnic polarisation and a sense of disorientation among voters. As this chapter highlights, technology can indeed serve as a tool to enhance democratic legitimacy and combat election rigging when properly implemented and managed. However, it is crucial to recognise that technology, especially digital information technology and social media, has become a double-edged sword. On the one hand, these platforms play a significant role in bolstering democracy by facilitating broader public participation, a fundamental requirement for any democratic society to thrive. Yet, on the other hand, the increased accessibility of the Internet has given rise to a new challenge: the spread of disinformation and misinformation.Footnote 5
This phenomenon poses a serious threat to the integrity of elections and democratic processes. As the overwhelming levels of disinformation and misinformation in Kenya’s 2022 General Election illustrate, misleading information can easily circulate, influencing public opinion and undermining the trust that citizens place in democratic institutions. The proliferation of false narratives skews the democratic playing field, making it difficult for voters to make informed decisions based on accurate and reliable information. Therefore, it is essential for the government, the election management body, civil society, tech companies and other stakeholders to tackle this issue collaboratively. As this chapter discusses, striking the right balance between the benefits of technology in enhancing democracy and countering its potentially negative effects requires concerted efforts to promote fact-checking, media literacy and responsible digital citizenship. By equipping citizens with the skills to distinguish accurate information from disinformation and misinformation, Kenya can safeguard the democratic ideals upon which the society is built and ensure that technology remains a tool for progress rather than a source of division. As Kenya prepares for future elections, addressing the challenges posed by disinformation and misinformation and their impact on the democratic process will be crucial in preserving the fairness and credibility of elections.
This chapter is organised as follows: Section 13.2 delves into the historical context of Kenya’s hybrid democracy and analyses the prevailing challenges in the electoral landscape. It investigates the complexities arising from ethnic politics and election rigging that have hindered the democratic process. Section 13.3 examines the role of technology in enhancing electoral integrity and legitimacy, particularly in relation to the role of technology in Kenya’s 2013 and 2017 general elections. It explores how the IEBC endeavours to leverage technology to administer elections effectively, aiming to curb election illegalities and irregularities. Section 13.4 sheds light on the growing challenge of disinformation and misinformation, particularly during the 2022 general election. It examines the weaponisation of fake news by politicians seeking electoral advantages, and the issues around technology and reliance on false information that arose in the 2022 presidential election petition. Section 13.5 looks at legal, institutional and non-legal measures for detecting and combating disinformation and misinformation. It discusses the pivotal roles various stakeholders such as the IEBC, tech companies and civil society actors play in detecting and combating false information. Further, it emphasises the importance of empowering citizens with media literacy and critical thinking skills to enable individuals to distinguish accurate information from misinformation. Section 13.6 then concludes. It offers a recapitulation of the findings and implications for Kenya’s electoral system.
13.2 Laying the Foundation: History of Kenya’s Hybrid Democracy and Challenges in the Electoral Landscape
The high stakes and divisiveness of elections are at the centre of Kenya’s woes and the resulting cycle of ethnic voting and violence that follows every election.Footnote 6 Ethnic politics in Kenya dates back to the pre-colonial era and persisted through colonisation and into independence. The British colonisation of Kenya employed a ‘divide and rule’ policy that played ethnic groups against each other to prevent a unified nationalist opposition. This strategy was later adopted by post-colonial political elites who saw their ethnic group as their crucial asset when negotiating for ‘a seat at the national table’.Footnote 7 At the time of independence, Kenya, like many other newly independent African states, prioritised nation-building and socio-economic development. With a limited number of schools, scarcity of hospitals and inadequate infrastructure, the political leaders of the era directed their efforts towards addressing pressing challenges such as high levels of illiteracy, poverty and disease.Footnote 8
The authoritarian model, which provided the President and executive with significant authority to regulate the allocation and distribution of resources, was seen as best suited to tackle these issues.Footnote 9 However, to distance it from its colonial origins and fit the post-colonial agenda, its mission was changed and repackaged as being necessitated by the ‘development first’ agenda:Footnote 10 to win the war against poverty, ignorance and disease, ‘the development plan trumped the Constitution as the most important economic and political document of the state’ and hence a ‘worthy trade-off’.Footnote 11 In addition, one-party politics was preferred to multi-party politics due to the fear that the latter would encourage partisan mobilisation of the subnational identities and loyalties manipulated by colonialism, ultimately undermining national unity and hindering the post-colonial development project’s success.Footnote 12 Yet, despite the ambitious development project, several challenges arose, including rampant corruption, stagnant or negative growth rates, worsening income inequality and a rise in absolute poverty.Footnote 13 As a result, the ‘development first’ initiative ultimately proved unsuccessful.
As public trust in the ruling elite waned, a troubling practice used during the colonial era re-emerged, where certain ethnically aligned sections of the citizenry, particularly those offering strong political support to the ruling elite, received preferential treatment and benefits in exchange for their loyalty.Footnote 14 This strategy was used by the ruling elite to maintain their grip on power and retain political support. It marked the inception of divisive ethnic politics in independent Kenya, characterised by election rigging through means such as ballot box stuffing, state-sponsored ethnic violence and a blatant disregard for the rule of law. Incumbents resorted to desperate measures to cling to power, turning elections into a zero-sum ‘winner takes all’ game. Those in control of the political order wielded significant influence, becoming the primary dispensers of favours and socio-economic benefits, leading to a system where loyalty to the ruling party was rewarded while dissenting voices faced marginalisation.Footnote 15 The emergence of ethnicised politics gave rise to profound and enduring ethno-regional inequalities, as politically dissident ethnic groups and their respective regions faced socio-economic marginalisation.Footnote 16 It also fuelled ethnic hatred and divisions.
Compounding this problem, a series of amendments to the 1963 Constitution entrenched an authoritarian governmental structure, concentrating power within the presidency and the executive under one-party rule.Footnote 17 This concentration of power disrupted the equilibrium between the executive and legislative branches, severely curtailing parliament’s ability to effectively supervise the actions of the executive.Footnote 18 Concurrently, the independence of the judiciary also suffered. Particularly noteworthy was the removal of judicial security of tenure for High Court and Court of Appeal judges in 1988, which rendered judges susceptible to removal at the discretion of the President.Footnote 19 Although this amendment was later repealed, its initial intention was to further consolidate executive power. The amendment weakened the judiciary’s credibility, subjected judges to the executive’s whims, and eroded the essential ‘safeguards necessary for maintaining fair administration, neutrality of public institutions, accountability of government, and the protection of rights in general’.Footnote 20 Consequently, the essential system of checks and balances necessary to uphold democracy within the three governmental branches was largely deficient, leading to adverse implications for the overall preservation of democratic principles.
The push for electoral reforms in Kenya took root in the 1990s when there was a growing demand for the repeal of section 2A from the 1963 independence Constitution. This section, introduced through a constitutional amendment in 1982, had transformed Kenya from a de facto to a de jure one-party state, leaving only the ruling party, the Kenya African National Union (KANU), in existence and outlawing political opposition.Footnote 21 This meant that individuals could only vie for political office if they were members of and nominated by KANU. The repeal of section 2A in 1991 and the subsequent restoration of multi-party politics was a significant milestone.Footnote 22 However, it soon became evident to the reform movement that this alone was ‘insufficient to democratise politics, usher in accountability and ensure responsive political policies’.Footnote 23 The 1963 Constitution, along with its various amendments, was deemed undemocratic, solidifying an authoritarian system of government, which made it difficult to challenge the ruling party’s political power.Footnote 24
The electoral landscape remained contentious, marred by ethnic hate-speech, violence and persistent allegations of rigging. This environment provided incumbents with a clear advantage, undermining the democratic process, and perpetuating their stay in power.Footnote 25 One of the key contributing factors to this situation was the partisan appointment of commissioners to the Electoral Commission of Kenya (ECK), which eroded public trust and raised questions about the Commission’s competence.Footnote 26 As a result of these challenges, there was an urgent need to establish mechanisms that would ensure transparent, inclusive, and credible elections. This imperative became a central agenda in the pursuit of constitutional reforms. The aim was to foster a system that could guarantee the integrity of the electoral process, promote fair representation, and restore public confidence in the democratic foundations of the country. The ECK made efforts to introduce technology before the 2002 and 2007 elections to support transparent and efficient administration of election processes, but it was generally slow in adopting the use of technology.Footnote 27 There was no clear strategy for technology integration and the process lacked statutory and regulatory backing. Moreover, stakeholders were not consulted, and staff were not properly trained on the use of technology in election administration.Footnote 28 The result was that technology had limited impact in results management, and a failure to report preliminary results in the December 2007 elections contributed to the widespread suspicion that the ECK lacked transparency and possibly manipulated the results.
After facing several challenges and false starts in the journey towards constitutional reform,Footnote 29 Kenya finally achieved a major milestone with the promulgation of a new constitution in 2010. This significant achievement followed a constitutional referendum, in which an overwhelming 67 per cent of Kenyans voted in favour of the Constitution. The success of the referendum can be attributed, in part, to the profound impact of the devastating 2007/2008 post-election violence that left a death toll of over 1,000 people and thousands more wounded and internally displaced.Footnote 30 This served as a wake-up call, highlighting the critical need for inclusivity and governance based on the rule of law rather than allegiance to a ruling elite or ethnic favouritism.
To further understand the changes Kenyans sought in the constitutional review process regarding elections, insights from the 2008 Independent Review Commission Report, commonly known as the ‘Kriegler Report’, which emerged following the 2007/2008 post-election violence, are valuable.Footnote 31 The Kriegler Report highlighted severe weaknesses in the ECK, leading to a loss of public confidence and institutional legitimacy. It thus recommended an overhaul of the electoral management process.Footnote 32 Consequently, the ECK was disbanded and replaced by the Interim Independent Electoral Commission (IIEC) and Interim Independent Boundaries Review Commission (IIBRC). These interim bodies were tasked with revamping the electoral management system, introducing technology in elections, creating a new voters’ register and proposing boundary delimitation reforms.Footnote 33 It is the IIEC that successfully conducted the constitutional referendum that led to the enactment of the 2010 Constitution.
13.3 Harnessing Technology for Electoral Integrity and Legitimacy: The Role of Technology in Kenya’s 2013 and 2017 General Elections
The history of Kenya’s elections has been marred by human interference through election rigging, voter bribery, state-sponsored ethnic violence and media blackouts during the announcement of presidential election results.Footnote 34 These issues continue to exacerbate the divisiveness of elections in Kenya. As highlighted in the previous section, the Kriegler Commission Report noted the reluctance and indifference of the ECK to take steps to integrate technology and develop systems for results transmission and procurement. The Commission considered various proposals, including one from the International Foundation for Electoral Systems advocating the integration of technology to streamline and facilitate electoral processes.Footnote 35 While acknowledging the efforts made between the 2002 and 2007 elections, the Kriegler Commission concluded that the introduction of technology in elections was unavoidable if existing challenges were to be addressed and the integrity and legitimacy of the electoral process enhanced.Footnote 36 Notably, scholars perceived the adoption of technology in elections as a means to improve ‘the administration of elections and increase voter confidence because of its perceived levels of accuracy, verifiability and efficiency as compared to manual systems’.Footnote 37 Consequently, the Kriegler Commission recommended that there be established ‘without delay … an integrated and secure tallying and data transmission system’.Footnote 38
The aim was to revamp the electoral management system and use technology as a key tool in election management processes, including the creation of a new voters’ register. Subsequently, the IEBC was established upon the promulgation of the 2010 Constitution and took over the mandate of the IIEC and the IIBRC.Footnote 39 The Independent Electoral and Boundaries Commission Act 2011 was then enacted as the first statute dedicated to the management and conduct of elections, outlining the Commission’s additional functions, including the use of appropriate technology in its activities.Footnote 40
Section 44(1) of the 2011 Elections Act provides for the use of technology in election processes. For the presidential election, the IEBC is required to transmit results electronically from polling stations to the constituency tallying centres (CTCs) and the national tallying centre (NTC) through the Results Transmission System (RTS). In addition, the IEBC is obligated to livestream and maintain a public portal for transparency and accountability.Footnote 41 By embracing technology in the electoral process and adhering to regulations that promote accessibility and inclusivity, the IEBC seeks to restore public trust and confidence in the electoral system, fostering a fair and credible electoral process in Kenya.
13.3.1 The 2013 General Election
The Kenyan public’s hope for credible elections was evident in the 2013 general election, where a record 12 million Kenyans, accounting for 85.9 per cent of registered voters, turned out to vote – the highest turnout since the reintroduction of multi-party politics in Kenya.Footnote 42 The election marked the first use of the Biometric Voter Registration (BVR) System, an electronically based national register, replacing the manual register known as the Green Book for voter identification. The BVR System was accompanied by the Electronic Voter Identification System (EVID) to verify voters’ identities.Footnote 43 The 2013 general election witnessed a closely contested race between two presidential candidates, Uhuru Kenyatta and Raila Odinga. After the election, Uhuru Kenyatta was declared the winner with 50.5 per cent of the votes. However, the outcome was quickly challenged in the consolidated petition of Raila Odinga and 5 Others v. Independent Electoral and Boundaries Commission and 3 Others.Footnote 44
A significant point of contention in the petition was the failure of the RTS and the BVR, which led to multiple variations in the electronic data generated.Footnote 45 In its judgment, the Supreme Court acknowledged the irregularities in data and information capture during the registration process, but deemed them insufficient to affect the credibility of the electoral process.Footnote 46 Although it acknowledged various irregularities, the Court found that the evidence did not demonstrate profound irregularities in the presidential election’s conduct that would invalidate the final result.Footnote 47 Thus, the petition was dismissed, and the Supreme Court upheld the election’s outcome.
In response to the discrepancies identified in the 2013 presidential election petition regarding the use of technology in managing and conducting elections, the Election Laws (Amendment) Act of 2016 was enacted to introduce legislative reforms related to election technology. One crucial change was made to Section 39 of the Elections Act, requiring the mandatory electronic transmission of presidential election results from each polling station to both the CTC and the NTC. The Act also mandated the IEBC to publish polling results forms on a publicly accessible online portal maintained by the Commission.Footnote 48
Another significant amendment was made to Section 44 of the Elections Act, which established an integrated electronic electoral system, the Kenya Integrated Elections Management System (KIEMS). This system was designed to facilitate various functions, including biometric voter registration, electronic voter identification and electronic transmission of election results. To ensure the integrity of the technology used, the amendment stipulated that the electoral system must be simple, accurate, verifiable, secure, accountable and transparent.Footnote 49 These amendments were aimed at enhancing the reliability and credibility of election processes by leveraging technology.
13.3.2 The 2017 General Election
The challenges the IEBC faced in ensuring the proper conduct of elections and curbing illegalities and irregularities during the 2013 general election resulted in a significant decline in voter turnout, from 85.9 per cent in 2013 to 77.5 per cent in the 2017 general election.Footnote 50 This reflects a growing sense of voter apathy. The 2017 general election was held on 8 August 2017, and it was anticipated that the IEBC would address the technological issues encountered in the 2013 election. To achieve this, the IEBC implemented the KIEMS for the first time.Footnote 51 While the electoral technology performed better than it did in 2013, technological challenges were still encountered, affecting the perceived credibility and legitimacy of election results.Footnote 52
Following the declaration of the incumbent president, Uhuru Kenyatta, as the winner by the IEBC returning officer for the presidential election on 11 August 2017, the opposition filed a petition, Raila Amolo Odinga and Another v. Independent Electoral and Boundaries Commission and 2 Others,Footnote 53 challenging the election’s fairness and legality. Numerous election irregularities and illegalities were cited, particularly in relation to election technology. The petitioners sought an audit and scrutiny of all presidential election forms and the election system, including Forms 34A, 34B, 34C, and the KIEMS kits, server(s) and website/portal.Footnote 54
As to the functions of Forms 34A, 34B and 34C: under section 39(1C) of the Elections Act, upon tallying the presidential election results at a polling station, the presiding election officer is to fill in the results in Form 34A and electronically transmit the scanned Form 34A to the CTC using the KIEMS kits. The returning officer at the CTC is then to verify and tabulate the results from the various polling stations in the constituency and, using these, generate Form 34B. The results are then to be sent to the NTC, where the chairperson of the IEBC is to follow the same process of verification and tabulation in generating Form 34C. Physical Forms 34A are then delivered to the NTC by the returning officers for comparison with the scanned forms.
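To make the flow of forms concrete, the following minimal sketch in Python models the aggregation path just described (polling-station Forms 34A rolled up into a constituency Form 34B, and constituency totals rolled up into national Form 34C figures). The data structures, field names and vote figures are illustrative assumptions for exposition only; they are not the IEBC’s actual schema or software.

```python
from dataclasses import dataclass, field

@dataclass
class Form34A:
    """Polling-station result form (illustrative fields only)."""
    polling_station: str
    constituency: str
    results: dict                 # candidate -> votes at this polling station
    scanned_image_attached: bool  # s. 39(1C) requires the scanned form to accompany the results

@dataclass
class Form34B:
    """Constituency-level tabulation generated by the returning officer at the CTC."""
    constituency: str
    results: dict = field(default_factory=dict)

def tabulate_constituency(forms, constituency):
    """Verify and tabulate polling-station results into a constituency Form 34B."""
    form_34b = Form34B(constituency=constituency)
    for f in forms:
        if f.constituency != constituency:
            continue
        if not f.scanned_image_attached:
            # Results transmitted without the scanned Form 34A were one of the
            # irregularities raised in the 2017 petition, so the sketch flags them.
            raise ValueError(f"{f.polling_station}: no scanned Form 34A accompanying the results")
        for candidate, votes in f.results.items():
            form_34b.results[candidate] = form_34b.results.get(candidate, 0) + votes
    return form_34b

def tabulate_national(forms_34b):
    """Repeat verification and tabulation at the NTC to produce the Form 34C totals."""
    national = {}
    for f in forms_34b:
        for candidate, votes in f.results.items():
            national[candidate] = national.get(candidate, 0) + votes
    return national

if __name__ == "__main__":
    station_forms = [
        Form34A("Station 001", "Constituency X", {"Candidate A": 320, "Candidate B": 280}, True),
        Form34A("Station 002", "Constituency X", {"Candidate A": 150, "Candidate B": 410}, True),
    ]
    form_34b = tabulate_constituency(station_forms, "Constituency X")
    print(tabulate_national([form_34b]))  # {'Candidate A': 470, 'Candidate B': 690}
```

In this framing, the physical Forms 34A delivered to the NTC serve as an independent record against which the electronically transmitted figures can be compared.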
The audit, supervised by the Supreme Court Registrar, ICT experts and agents of the parties, confirmed various discrepancies.Footnote 55 These included the fact that a significant number of the electronically transmitted election results were not accompanied by scanned images of Form 34A, contrary to Section 39(1C) of the Elections Act.Footnote 56 This illegality was said to be compounded by various issues: the results were announced on the basis of Forms 34B before all the necessary Forms 34A had been received; the results on the IEBC portal differed from those on the Forms 34B provided to the Court; there were variations between the results displayed on the IEBC portal and those in Forms 34A and 34B; and third parties had manipulated the IEBC electronic system, generating numbers for transmission to the NTC.Footnote 57 Indeed, the IEBC admitted that it had declared the election results before receiving results from 11,883 polling stations, 17 constituency tallying centres and authentic Forms 34A from 5,015 polling stations, which represented more than 3.5 million voters.Footnote 58 It was, therefore, the petitioners’ contention that the transmission of unverified results flouted the principles of free and fair elections provided for in Article 81(e) of the Constitution as read together with various provisions of the Elections Act, the Elections (General) Regulations and Section 25 of the IEBC Act.Footnote 59
These discrepancies cast doubt on the integrity of the electoral process and raised questions about whether the election truly represented the free expression of the people’s will, as envisioned in Article 38 of the Constitution.Footnote 60 Consequently, the Supreme Court made an unprecedented decision on 1 September 2017 to nullify the presidential election. Despite the efforts to improve the electoral process through technology, these discrepancies and irregularities raised concerns about the credibility and integrity of the election results. These issues further contributed to the prevailing sense of voter apathy, as citizens became more sceptical about the electoral process and its ability to reflect the true will of the people. As noted by political scientist Nic Cheeseman, misinformation in relation to the 2022 general election results appeared credible due to a history of failure of technology in 2013 and 2017, as it played into public expectations.Footnote 61
13.4 The Emergence of Disinformation and Misinformation: Impact on the 2022 General Election
In the lead-up to the 2022 general election, the IEBC worked diligently to address the challenges related to the use of technology in administering elections. But, at the same time, a concerning trend emerged, as politicians began adopting new strategies to influence political outcomes, capitalising on the rise of disinformation and misinformation. False narratives were strategically disseminated online by politicians with the intention of discrediting opposing candidates and gaining an advantage in the election. Examples include the spread of false claims, such as alleged defections or fabricated news of a candidate’s demise, often strategically timed for the day of the election or the days leading up to it.Footnote 62 This development aligns with Cheeseman’s and political scientist Brian Klaas’ observation that it has become increasingly difficult to detect election rigging. This is because whenever monitors devise new strategies to counter tried and true rigging tactics – in the case of Kenya, leveraging technology to prevent election rigging – dictators and despots continue to innovate.Footnote 63
13.4.1 Examining the Phenomenon of Disinformation and Misinformation in Kenya
Beata Martin-Rozumiłowicz and Rasťo Kužel proffer an apt definition of disinformation as deliberately ‘false or misleading information that is created or disseminated with the intent to cause harm or to benefit the perpetrator’ and misinformation as ‘false or misleading information that is shared without the intent to cause harm or realization that it is incorrect’.Footnote 64 In the context of elections, deliberately false information is spread to achieve political or financial gains and is usually directed either at individuals or at groups, institutions and processes.Footnote 65 This section places its central emphasis on the utilisation of disinformation and misinformation – used interchangeably with the term fake news, defined as ‘purposefully crafted, sensational, emotionally charged, misleading or totally fabricated information that mimics the form of mainstream news’Footnote 66 – as a strategic tool wielded by politicians and various stakeholders to manipulate and shape political narratives and results within Kenya’s electoral landscape. The discussion also covers instances where credible news is illegitimately labelled as ‘fake news’.
Misinformation is ‘one of the greatest challenges facing democracy in our time’.Footnote 67 In democratic societies, information is a powerful tool, especially during election campaigns. Kenyan elections exemplify this phenomenon, with a significant rise in digital campaigning across popular social media platforms such as Twitter and Facebook, messaging apps such as WhatsApp, and the video-sharing app TikTok during general elections.Footnote 68 At one end of the spectrum, this increases opportunities for the participation of women and youth, facilitates coordination by civil society and the opposition, and undermines governmental control of information flow.Footnote 69 Social media has become an inseparable part of African politics, where politicians leverage influential bloggers to bypass mainstream media and directly reach voters with their campaign messages.Footnote 70 However, at the other end of the spectrum, this approach often involves collaborating with global firms, alongside local tech-savvy teams that engage in disinformation campaigns against election opponents.Footnote 71
The potency of misinformation relies heavily on the extent of internet penetration within a country. As of 2022, reports indicated that internet penetration in Kenya stood at approximately 85 per cent according to Reuters, while Article 19, a human rights organisation, estimated it to be even higher at 93.7 per cent.Footnote 72 In addition, there were around 11 million social media users, and this number continues to grow daily.Footnote 73 Unfortunately, Kenya’s low literacy rate means that internet users can be susceptible to believing and sharing fake news, turning ‘uninformed voters’ into ‘misinformed voters’.Footnote 74
The prevalence of big data in this digital landscape opens the door for foreign companies to engage in user profiling and targeted election campaigns online, exploiting the ease with which individuals can be tracked through social media platforms. The utilisation of information in elections is not a novel concept, and some argue it is akin to ‘traditional media advertising’, which employs surveys and focus groups to identify the most effective approaches for pitching newspaper and TV advertisements to voters.Footnote 75 The use of data in elections and political campaigns is also likened to the data practices used in marketing and commercial advertising.Footnote 76 Data companies often claim that the collection of information for profiling purposes is solely aimed at tailoring campaign messaging to suit individual voters, but this assurance raises apprehensions regarding the extent to which voters’ actions and beliefs are recorded and manipulated to achieve specific outcomes for particular candidates. This concern is particularly pronounced in authoritarian contexts, where the ruling elite benefit more from such practices due to their greater access to campaign resources and the government’s ability to restrict the entry of employees from companies supporting the opposition by denying them visas.Footnote 77
Modern autocrats increasingly find digital manipulation of data and information to be a useful tool, not only because it offers distinct advantages but also because it falls into a legal grey area, escaping strict regulation as election rigging.Footnote 78 This makes it an appealing option for those seeking to influence elections discreetly. As Charles Fombad aptly observes, this strategy makes ‘old techniques such as vote-buying, ballot-box-stuffing and double-voting seem positively crude and outdated’.Footnote 79
Mounting concerns surround the increasingly sophisticated methods data companies use to collaborate with candidates in engineering electoral victories by leveraging information to their advantage.Footnote 80 In Kenya, Cambridge Analytica, known for its data-driven approach to ‘changing audience behaviour’, was reported to have been involved in President Uhuru Kenyatta’s 2013 and 2017 general election campaigns.Footnote 81 The company mined user data from social platforms to target specific messages at individuals, based on profiles created from their internet usage, with the aim of encouraging supporters to vote and discouraging opponents from showing up at polling stations.Footnote 82 These tactics provide a significant electoral advantage to a party, yet they are not explicitly prohibited by electoral law in Kenya, and they are not typically categorised as election rigging in election observation reports such as the IEBC’s Post-election Evaluation Report.Footnote 83 This raises concerns about the lack of regulation and oversight regarding such practices, and the potential impact on the democratic process and the integrity of elections.
There has been a significant growth in the complexity of disinformation campaigns, in terms of both the content disseminated and the channels used for distribution. Much misinformation is hard to prove or disprove as the information appears highly plausible to a broad range of people. Studies have shown that misinformation is more effective where it plays into existing prejudices or assumptions about a candidate, for example where it involves claims of corruption, incompetence, moral bankruptcy or failures as a spouse.Footnote 84
In Kenya’s 2022 general election, TikTok emerged as a major purveyor of false information.Footnote 85 Other studies reveal that Facebook was predominantly used for spreading campaign messages, while WhatsApp proved highly effective for mobilisation due to its extensive user base.Footnote 86 Aside from these, so-called political keyboard warriors emerged. These are bloggers who strategically position themselves during political campaigns to either create a favourable image of a candidate or de-campaign their opponents by spreading misinformation, often in exchange for payment.Footnote 87 Bloggers also spread misinformation to motivate candidates to hire them to rewrite the narrative and frame them in a positive light.Footnote 88
The combination of these tactics raises legitimate concerns about the manipulation of information and its impact on the democratic process, particularly considering that research indicates at least 75 per cent of Kenyan news consumers struggle to discern between real and fake news.Footnote 89 Further, a survey conducted in connection with the 2017 general election uncovered that approximately 90 per cent of respondents reported encountering content they suspected to be fake news, while around 80 per cent had come across information that they believed was intentionally fabricated or false.Footnote 90
Unlike other rigging tactics commonly used in hybrid democracies like Kenya, disinformation stands out as a strategy available to both the government and the opposition to manipulate electoral outcomes.Footnote 91 Notably, there have been reports indicating that a group of Twitter influencers were allegedly paid by ‘shadowy financiers’ in May and June 2021 to spread disinformation in favour of the government-backed Constitution of Kenya (Amendment) Bill 2020.Footnote 92 This Bill was developed by the Building Bridges Initiative (BBI) Steering Committee and was famously held by the Kenyan Supreme Court to be an unconstitutional attempt at constitutional amendment, because it was initiated by the President rather than being citizen-driven, as a popular initiative ought to be.Footnote 93 The disinformation campaign involved the harassment and discrediting of journalists, judges and civil society activists on Twitter, harming the individuals targeted and also undermining the credibility of the institutions they represent, which are important pillars of democracy.Footnote 94 The disinformation campaign had a clear objective: to promote the BBI process and falsely portray the aforementioned group of people (journalists, judges and so on) as villains opposed to development, and as being funded by William Ruto, the then Deputy President, who opposed the amendment. The aim was to deceive people, through hashtags and photoshopped images, into believing that the trending opinions were widely popular, and to create a false narrative surrounding the BBI process.Footnote 95
Commentators emphasise that the utilisation of data in elections and political campaigns is highly intrusive to individuals’ privacy and gives rise to significant security questions. Moreover, such practices have the potential to erode trust in the democratic process, as citizens may question the authenticity and legitimacy of election outcomes when data manipulation is at play.Footnote 96 As the use of data and digital tactics in elections continues to evolve, it is crucial to strike a balance between the legitimate use of voter information for campaign messaging and safeguarding citizens’ privacy and democratic values.
Concern has also been raised about the lack of accountability of global companies that engage in political campaigns within local contexts. A notable example of this occurred during Kenya’s 2017 general elections, where campaigns conducted by Harris Media LLC insinuated that presidential aspirant Raila Odinga would be involved in forcibly displacing ‘whole tribes from their homes’.Footnote 97 In a country with a history of electoral violence and deep ethnic divisions like Kenya, such online campaigns have the potential to stoke the embers of ethnic hatred and exacerbate social tensions.Footnote 98
13.4.2 Disinformation and Misinformation in the 2022 General Election
The 2022 general election witnessed the lowest voter turnout in fifteen years, with a mere 65.4 per cent of the registered 22.12 million voters participating, signalling an alarming surge in voter apathy.Footnote 99 The KIEMS technology played a pivotal role in biometric voter registration and, on election day, in voter identification and results transmission. Notably, the system was successfully deployed and showcased improvements compared to prior elections.Footnote 100 However, the overarching technological concern during this electoral cycle revolved around the proliferation of disinformation and misinformation across online platforms.
During the election period, a deluge of misinformation plagued social media platforms, including false claims of victory, alleged political kidnappings, conspiracy theories and targeted attacks.Footnote 101 A deeply concerning facet of the misinformation campaigns was the prevalence of sexualised gender attacks, particularly targeting female political aspirants, through platforms including Twitter, WhatsApp and TikTok. The disproportionately frequent attacks against women candidates further compounded the existing challenges they already grapple with during electoral campaigns.Footnote 102 This situation was exacerbated by a critical factor: although the IEBC had posted images of Forms 34A illustrating results from approximately 99 per cent of the 46,663 polling stations across Kenya, the Commission had refrained from tabulating the cumulative figures on its public portal. The physical Forms 34A were being transported to the national tallying centre situated in the capital city, Nairobi, for comprehensive verification and tabulation of the grand totals before the official results were slated to be announced by the IEBC chairperson.Footnote 103
This informational gap engendered a confusing tallying process, as different media platforms broadcast conflicting figures based on their individual tabulation of Forms 34A. For example, on 10 August 2022, Citizen TV showed presidential candidate Odinga leading with 51.3 per cent of the vote, with William Ruto in close pursuit at 47.3 per cent. Concurrently, the Nation Media Group displayed Ruto in the lead with 50.7 per cent, followed by Odinga at 48.6 per cent. This scenario sowed confusion and provided fertile soil for the proliferation of false news narratives. As recent studies show, the spread of misinformation is also problematic when it is sustained in and through traditional media, particularly where that media fails to challenge it.Footnote 104
A Mozilla Foundation study highlights that the spread of misinformation was further amplified by tech platforms, contributing to the uncertainty surrounding election results. Despite assurances from these tech companies that they would address problematic content in the run-up to the elections, the proliferation of misinformation remained a significant issue.Footnote 105 This failure to effectively combat the dissemination of false information on their platforms contributed to public confusion and distrust, as voters struggled to distinguish accurate information from misleading claims. On this issue, Cheeseman aptly observes:
Even those of us well seasoned in analysing rigging claims have struggled to cope with the bewildering monsoon of misinformation that has rained down on social media. When you wake up every morning to a new set of messages about how the process was manipulated, it can be hard not to believe there must be something in them – even as every claim you investigate turns out to be unconvincing.Footnote 106
After the announcement of Ruto as President-elect by then IEBC Chairperson Wafula Chebukati on 15 August 2022, with 50.49 per cent of the presidential vote compared to the runner-up Odinga’s 48.85 per cent, numerous parties contested the result of the presidential race. This situation became more pronounced following a last-minute announcement by four IEBC commissioners who disavowed the verification and tallying process, asserting that the actual results diverged from those about to be declared by the chairperson.Footnote 107 This dramatic turn of events occurred just prior to the chairperson’s planned announcement of the results, and was replayed on social media by prominent political leaders, causing a heightened state of agitation among Kenyans and fuelling a surge of conspiracy theories online. As noted by a recent study, misinformation in relation to the 2022 election results appeared credible not only because of a history of technological failure, which played into public expectations, but also because of its perpetuation by authority figures.Footnote 108 Where senior political leaders repeated what was not true, they gave false reports a second life by lending the misinformation political legitimacy.Footnote 109
The consolidated presidential election petitions were titled Raila Odinga and 16 Others v. William Ruto and 10 Others; Law Society of Kenya and 4 Others (amicus curiae).Footnote 110 As in the preceding two post-2010 presidential election petitions, it was contended that the deployment and utilisation of technology did not align with the prescribed constitutional and statutory standards.Footnote 111 In response, the IEBC asserted that the electronic election management system conformed to the constitutional criteria and that only authorised individuals could access the essential information, ensuring accuracy, completeness and protection against tampering, from both authorised and unauthorised sources.Footnote 112 Upon an order from the Supreme Court, the IEBC granted all petitioners supervised access to its servers once the petition commenced. The Court also allowed for a thorough examination, recounting and scrutiny of the ballot boxes from the polling stations in question.Footnote 113
The Court found that there was insufficient evidence to support the assertion that the technology deployed in the election administration failed to meet the prescribed standard set out in Article 86(a) of the Constitution, which encompasses transparency, security, integrity and verifiability.Footnote 114 While acknowledging instances of KIEMS kit failures, the Court noted that manual verification successfully took place for the 235 polling stations where the KIEMS kits malfunctioned. It emphasised that Kenya’s election process is a blend of technology and manual methods, rendering the resort to manual voting for 86,889 voters compliant and not a disenfranchisement.Footnote 115
It was held that the petitioners did not present any evidence that met the requisite standard of proof to demonstrate unauthorised access to the electronic election management system. The Registrar of the Supreme Court’s report likewise failed to indicate any security breaches of the Results Transmission System by unauthorised individuals.Footnote 116 Addressing the petitioners’ claim that 11,000 Forms 34A had been manipulated before being uploaded onto the IEBC’s online public portal within eight minutes, the Court termed this assertion practically implausible, as it would necessitate intricate coordination of people and alteration by individuals and machines across all 11,000 polling stations in question.Footnote 117
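The figures in that claim lend themselves to a quick back-of-the-envelope check, sketched below; the 11,000 forms and the eight-minute window are taken from the claim as described above, while the per-second framing is an illustration rather than a figure drawn from the judgment itself.

```python
# Back-of-the-envelope arithmetic behind the implausibility finding:
# the figures (11,000 forms, eight minutes) come from the claim described above;
# the per-second rate is an illustrative calculation only.
forms_alleged = 11_000
window_seconds = 8 * 60  # eight minutes

rate = forms_alleged / window_seconds
print(f"Implied alteration rate: {rate:.1f} forms per second")  # ~22.9 forms per second
```

Sustaining roughly twenty-three coordinated alterations per second across thousands of dispersed polling stations is the kind of logistical demand the Court appears to have had in mind when it described the assertion as practically implausible.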
In addition, the claim that certain Forms 34A presented to agents differed from those on the public portal was dismissed for lack of credible evidence. The Court disallowed the use of Forms 34A provided in affidavits by two advocates representing the first petitioner, which deviated significantly from the originals and certified copies on the public portal.Footnote 118 The Court reprimanded the advocates for presenting misleading or fabricated evidence in the judicial process. It also noted instances of false and misleading information in the affidavit of the prominent Kenyan corruption whistle-blower John Githongo, including forgeries and inadmissible hearsay. Githongo’s application to withdraw the affidavit was denied by the Court.Footnote 119 The Court’s frustration with the petitioners’ assertions, which ultimately held no evidential weight despite a rigorous trial process and thorough investigation, prompted it to label these claims as mere ‘hot air’ and liken their pursuit to a ‘wild goose chase’.Footnote 120 Concluding that the evidence was insufficient to warrant the annulment of the presidential election, the seven-judge Supreme Court bench unanimously dismissed the consolidated petition, thereby upholding the IEBC’s declaration of Ruto as the president-elect.Footnote 121
As discussed, a concerning increase in the use of misinformation was observed after voting and during the hearing of the petition. It was argued that this increase occurred because, with voting over, the risk that such misinformation would provoke a backlash by affecting voting behaviour was reduced.Footnote 122 Even more alarming was the purported verification of the misinformation by trusted interlocutors such as Githongo, which made it more likely to be believed by Kenyans. However, little attention was paid to the potential impact of this misleading content on voter sentiment and behaviour, or on the legitimacy of the new government and the credibility of the IEBC.Footnote 123 The unchecked dissemination of misinformation can have serious consequences for the democratic process and for the extent to which Kenyans can trust future electoral processes to be credible, warranting a closer examination of its implications for voter decision-making and the overall electoral landscape.
13.4.3 Impact of Disinformation and Misinformation during the 2022 General Election
With the rise in disinformation and unjustified labelling of credible news sources as ‘fake news’, there has been a concerning decline in public trust, affecting key institutions such as the traditional media (TV, radio and newspapers) and the IEBC. Kenya is already categorised as a low-trust society, with both political and social trust being at low levels.Footnote 124 Trust is a crucial element in the functioning of a democracy, and its absence poses a significant barrier to the consolidation of democratic values and principles.Footnote 125
The erosion of trust can be traced back to the aftermath of the 2007/2008 post-election violence and the disputed elections in the electoral cycles that followed, as earlier discussed. These events shook public confidence in the electoral process and raised doubts about the credibility of election outcomes. In particular, the nullification of the 2017 presidential election, which had been overseen by Wafula Chebukati, the same chairperson who presided over the 2022 elections, further contributed to the diminished trust in the IEBC. On this, it has been argued that:
Misinformation thrives, of course, when key institutions cannot be trusted and when it is repeated by respected figures. In the context of the Kenyan elections, many citizens went into the campaign with limited trust in Chebukati because he had presided over the 2017 elections that was nullified by the Supreme Court. Their trust was further eroded – some might say exploded – when, just as he was about to read out the result, four ‘rebel’ commissioners gave a rival ‘presser’ saying they could not stand behind the results. It did not matter that when the four commissioners subsequently provided details of their concerns it transpired that they rested, in part, on a mathematical misunderstanding so basic that it called into question both their capacity and their motivations. The sight of the IEBC once again at war with itself was enough to give credence to the claims the Commission had been politically captured and had fabricated the entire process.Footnote 126
The growing distrust of Kenya’s traditional media has created an environment conducive to disinformation and misinformation. Studies reveal a decline in public trust in mainstream media, which can be attributed to its perceived affiliation with ethnic and class interests.Footnote 127 When media outlets display discernible bias towards a particular candidate or political agenda, it further erodes public trust.Footnote 128 The concentration of media ownership in the hands of the political elite has strengthened the perception that the media is captured, prioritising the interests of political and corporate elites over its role in serving the public interest.Footnote 129 The media’s portrayal of varied election results following the 2022 elections only reinforced this perception of capture by political elites. The contradictory reporting by different media outlets contributed to the growing scepticism among the public, as they struggled to determine the accuracy and impartiality of the information presented.
Derogatorily dubbed ‘githeri media’ after its coverage of the 2017 elections, traditional media has lost its position as the primary agenda setter in the country, yielding ground to social media.Footnote 130 The term ‘githeri’ refers to a simple local dish made of maize and beans boiled or fried together. The label ‘githeri media’ is considered a symbol of disapproval, highlighting mainstream media’s fixation on trivial matters. This was evident during the 2017 general election, when traditional media incessantly covered a man eating ‘githeri’ while overlooking more critical issues related to electoral irregularities and malpractices that were being discussed on social media platforms.Footnote 131 This focus on the trivial over substantive electoral matters further eroded public trust in traditional media. However, the decline in public trust in media is an Africa-wide phenomenon.Footnote 132
With the public’s increasing scepticism towards traditional media, authoritarian figures and ruling elites find it easier to maintain their grip on power. They capitalise on discrediting independent media outlets that challenge the official state narrative. By undermining the credibility of those who question their authority, these elites can control the flow of information and shape public perception in their favour.Footnote 133 Alongside the credibility deficit created by weakened trust in traditional media has emerged a society stratified according to whose information is believed. As George Ogola correctly posits:
Significant too are the hierarchies that have emerged online. Those with the largest following tend to be politicians, celebrities, media personalities and those familiar with the digital literacy necessary to build a following. What these people say dominate conversations. The nature of agenda setting is thus personality driven. This is one of the most fundamental flaws of the online communication economy, and one that demands that we develop a deep circumspection.Footnote 134
Moreover, the rise of fake news and digitally manipulated information has given rise to what is known as a ‘post-truth’ world,Footnote 135 where objective facts are heavily contested, and voters may be swayed more by emotions than by verifiable information when making decisions on certain issues or leaders. In this environment, all information can be subject to doubt and scrutiny, allowing election-rigging claims to be disputed and enabling rigged elections to be falsely portrayed as legitimate.Footnote 136
In the past, authoritarian leaders had to exert significant effort to suppress information to maintain control. However, researchers now observe that many voters seem content with only receiving information provided by the government, effectively granting the government complete authority over defining truth and falsehood. This control over information empowers governments to manipulate facts at will, leading citizens to accept blatantly false statements.Footnote 137 As a result, the flow of information becomes so controlled by authoritarian regimes that they may no longer need to engage in overt rigging at the ballot box. Instead, they rely on indoctrination and fear tactics to achieve the same desired outcomes from the electorate.Footnote 138 In such a post-truth landscape, the erosion of public trust in reliable sources of information and the dissemination of disinformation create challenges for democratic processes. The acceptance of false narratives and the rejection of objective facts may hinder the ability of citizens to make informed choices, leading to the entrenchment of authoritarian rule and undermining the principles of free and fair elections.
Nevertheless, while fake news and misinformation remain challenges for democracies globally, misinformation did not reach the heightened levels many had feared in the 2022 Kenyan elections. Save for one doctored video of Ruto apparently attacking non-Kalenjin communities, which was quickly discredited by mainstream media,Footnote 139 the harder forms of misinformation, such as deepfakes, were absent. Kenya’s 2022 elections demonstrated that social media, through which a large portion of misinformation is spread, has not entirely changed the nature of elections in Kenya. Electoral success requires substantial investment in interpersonal campaigns, meaning that a candidate cannot rely exclusively on digital campaigns to sway the voting population. Even so, it remains unclear how much digital conversations affect offline behaviour or influence voting decisions.Footnote 140 The intersection between digital media and pavement media in influencing opinions is therefore an area that requires further investigation.
13.5 Detecting and Combating Disinformation and Misinformation: Legal Landscape and Engaging Stakeholders for Electoral Integrity
Given the detrimental impact of disinformation and misinformation on the foundations of democracy and the exercise of political rights, there is an urgent need to implement robust strategies for detecting and combating this pervasive problem. This urgency is underscored by Kenya’s historical context, marked by instances of election-related violence and manipulation, where the amplification of misinformation exacerbates the issues of voter disengagement, ethnic divisions and a general sense of bewilderment among the populace. In this section, we explore various recommendations that have been proposed, including legal, policy and institutional measures. Education and media literacy initiatives are also crucial to empowering citizens to discern accurate information from fake news and are discussed here. The section also highlights the significance of transparency, accountability and responsible use of technology to safeguard the legitimacy of elections and democratic processes. It thus emphasises the need to balance the use of technology, freedom of speech and media regulation.
13.5.1 Legal and Legislative Approaches to Combating Disinformation and Misinformation
In examining the legal and legislative measures to address fake news, it is essential to begin with Kenya’s Constitution. Article 33(1) of the 2010 Constitution guarantees the right to freedom of expression, including seeking, receiving and imparting information. However, this right is not absolute, as Article 33(2) clarifies that it does not protect expression that spreads propaganda for war, incites violence, constitutes hate speech, advocates ethnic incitement leading to harm or vilification of others, or promotes discrimination based on protected characteristics. In the context of Kenya’s elections, this would specifically involve combating the spread of ethnic hate speech, incitement to violence and defamatory statements that damage the reputation of others (Article 33(3)). Thus, a delicate balance is required between the right to freedom of expression, which includes freedom of speech, and protecting individuals from the harmful effects of misinformation and disinformation. In addition, Article 31 on the right to privacy also places limitations on freedom of expression by safeguarding individuals’ right to protect information about their families, private affairs and private communications.
Mainly applicable to journalists, Article 34(1) of the Constitution further guarantees the freedom of the media and the independence of electronic, print and other types of media. However, the limits set out in Articles 31, 33(2) and 33(3) also apply to the media. Article 34(3) allows for necessary regulation of the media to strike a balance between media freedom and preventing the dissemination of harmful information. The limits set in the aforementioned articles have, at times, been exploited to enable the unjust regulation of the media under the pretext of curbing unprotected forms of expression. Nonetheless, the courts have periodically struck down such provisions based on constitutional grounds.Footnote 141
An additional constraint on freedom of expression and the media is encapsulated in Article 24 of the Bill of Rights, which constitutes the general limitation clause. Article 24(1) provides that fundamental rights ‘shall not be limited except by law, and then only to the extent that the limitation is reasonable and justifiable in an open and democratic society based on human dignity, equality, and freedom’. Subsequently, the latter part of the limitation clause (Articles 24(1)(a)–(e)) introduces a proportionality test, delineating five factors that must be considered to determine the appropriateness of limiting a particular right. These factors encompass ‘the nature of the fundamental right or freedom’, ‘the significance of the purpose of the limitation’, ‘the character and scope of the limitation’, ‘the necessity to ensure that an individual’s enjoyment of rights and fundamental freedoms does not impair the rights and freedoms of others’ and ‘the connection between the limitation and its purpose, along with the presence of less restrictive means to achieve the intended goal’. These constitutional provisions are reinforced by Article 27(4), which essentially prohibits misinformation and disinformation directly or indirectly discriminating against individuals based on various grounds, ‘including race, sex, pregnancy, marital status, health status, ethnic or social origin, colour, age, disability, religion, conscience, belief, culture, dress, language or birth’.
In addition to the constitutional measures, various statutes penalise the dissemination of certain kinds of fake news. One such statute is the Data Protection Act of 2019, which in section 2 defines data as information that is processed or recorded by means of equipment operating automatically in response to instructions, or recorded as part of a relevant filing system. Section 25 of the Act sets out the principles and obligations of data protection, emphasising respect for the data subject’s right to privacy and requiring that personal data be ‘collected for explicit, specified and legitimate purposes and not further processed in a manner incompatible with those purposes’.Footnote 142 The Act also requires data controllers or processors to ensure that personal data is accurate and kept up to date, with measures taken to promptly erase or rectify any false or misleading data.Footnote 143 These provisions serve to protect against fake news that violates individuals’ privacy through the dissemination of personal data and against the use of altered photos and videos, commonly known as deepfakes, to spread misinformation and disinformation.Footnote 144 As noted above, however, the use of deepfakes in the 2022 general elections was limited.
Section 26 of the Act further provides for the rights of erasure, correction or deletion of false or misleading data, as well as the right to be informed and consent to the collection and use of personal data. Meanwhile, section 28 permits the collection of data from public records when the same has been made public or with the data subject’s consent from another source. However, sections 28 and 29 also mandate that reasonable steps must be taken to notify the data subject that their data is being processed, and failure to do so is considered an offence. These provisions aim to address the spread of fake news that involves the manipulation of personal data and provide individuals with the right to manage their data, ensuring accuracy and protection from misuse.
Second, the Computer Misuse and Cybercrimes Act of 2018 addresses the intentional publication of falsehoods and misinformation. Section 22(1) of the Act criminalises the deliberate dissemination of false information, and those found guilty can face a fine of Ksh 5 million (approximately US$50,000), a maximum of two years’ imprisonment, or both. Furthermore, section 22(2) of the Act limits freedom of expression, in accordance with Articles 24 and 33 of the Constitution, with regard to the intentional publication of falsehoods and misinformation. The Act specifically targets content that is likely to incite violence, promote war, propagate ethnic hate speech or discrimination, or negatively affect the rights or reputations of others. Section 23 further addresses the publication of false information with the specific intention of causing panic and chaos or inciting violence. Those found guilty of spreading such false information face severe penalties, including a fine of Ksh 5 million, a maximum of ten years’ imprisonment, or both. These legal measures serve to deter the spread of fake news and misinformation in the digital realm, ensuring accountability for those who intentionally disseminate harmful false information.
The constitutionality of the Computer Misuse and Cybercrimes Act was contested in the case of Bloggers Association of Kenya (BAKE) v. Attorney General and 3 Others; Article 19 East Africa and Another (Interested Parties).Footnote 145 With regard to sections 22 and 23 of the Act, which criminalise the dissemination of false information, the petitioners argued that truth was not a prerequisite for free speech. They further contended that these laws, which aimed to regulate truth using terms such as ‘false, misleading or fictitious data or information’, could infringe upon the freedom of expression.Footnote 146 In response, the Court reasoned that ‘the state has a legitimate interest in ensuring the safety and integrity of information and the protection of its citizens against cybercrimes’, especially when the freedom of expression impacts not just individual rights but also the broader public.Footnote 147
Considering the rapid dissemination of misinformation or fake news on the Internet, leading to fear, panic and even potential threats to national security, the limitations imposed by sections 22 and 23 of the Act in the form of the offence of publishing false information over a computer system were found to be reasonable, justifiable and in line with the constraints set out in Articles 24, 31, 33(2) and 33(3) of the Constitution.Footnote 148 These provisions permit publishers to express themselves as long as their expression does not harm others and avoids constituting cyber libel.
Addressing the claim that the terms ‘false, misleading or fictitious data or information’ are vague, overbroad and would have a chilling effect on speech by, inter alia, journalists, activists, academics and politicians, Justice Makau held that the language used in sections 22 and 23 is precise and clear when interpreted contextually according to statutory interpretation principles.Footnote 149 He further noted that the term ‘false’ is readily understood in plain English, and does not necessitate a legal definition. Its application in section 23 was said to be evident – to criminalise the dissemination of false information with the intention to induce panic, chaos, violence or tarnish an individual’s reputation.Footnote 150 As a result, the Court upheld the constitutionality of these two provisions.Footnote 151 However, civil society actors continue to call for the repeal of sections 22 and 23 of the Act. They argue that ‘The vague prohibition of “false”, “misleading” and “fictitious” data is highly subjective, and this law has been used to harass journalists, bloggers and activists in the past’.Footnote 152 They thus recommend the removal and replacement of these sections with less intrusive measures to combat disinformation.Footnote 153
Third, the Defamation Act offers civil remedies in cases where fake news qualifies as libel.Footnote 154 Notably, Section 7(3) specifies that the Act does not protect the publication of any matter that is not of public concern and whose publication is not for the public benefit. The Act thus supports freedom of expression regarding matters of public interest, while holding individuals accountable for spreading false information that can harm others or undermine public trust in the media.
Fourth, Section 66(1) of the Penal Code criminalises the publication of false statements, rumours or reports that are likely to cause fear or alarm to the public or to disturb public peace.Footnote 155 The section designates such conduct a misdemeanour. Nonetheless, if the accused person can demonstrate that they took reasonable measures to verify the accuracy of the statement, rumour or report before publication and believed it to be true, Section 66(2) provides them with a defence.
In the case of Cyprian Andama v. Director of Public Prosecutions & 2 Others; Article 19 East Africa (Interested Party), the petitioner challenged the constitutionality of section 66(1).Footnote 156 The petitioner argued that the wording of this section, which states, ‘likely to cause fear and alarm to the public or to disturb public peace’, was overly vague and its coverage too broad, thus allowing for subjective interpretations and potential misuse in charging individuals. This, the petitioner contended, violated their Article 33 right to freedom of expression.Footnote 157 Furthermore, it was asserted that the provision was so unclear and uncertain that individuals could not discern the boundaries of permissible communication, potentially ensnaring the innocent along with the guilty.Footnote 158 The central issue before the High Court was whether the limitations on freedom of expression in Section 66 of the Penal Code could be deemed reasonable and justifiable in a free and democratic society, as mandated by Article 24 of the Constitution, which sets conditions for limiting fundamental rights and freedoms.Footnote 159
Justice Korir’s judgment established that Section 66 allowed for the conviction of individuals merely for making statements that were considered untrue, without placing the burden on the prosecutor to demonstrate the statement’s falsehood. This, in turn, created an atmosphere of trepidation surrounding the exercise of freedom of expression.Footnote 160 Moreover, the section was deemed excessively broad as it could potentially prohibit not only the publication of false statements but also opinions that were honestly believed to be truthful, violating the rule that legal provisions establishing criminal offences are to be clear, concise and unambiguous.Footnote 161 The Court also noted that the Defamation Act offered a less restrictive means of safeguarding the reputation of individuals who had been unfairly defamed. It provided a civil remedy that deters those intent on damaging others’ reputations by imposing monetary penalties through the award of damages.Footnote 162 Consequently, the Court ruled that Section 66 was an unjustified violation of Article 33 and, therefore, unconstitutional.Footnote 163 A cautious approach is thus necessary, as the aforementioned laws, which ostensibly limit freedom of expression and speech, have sometimes been exploited by the government to suppress critics and impede open discussion of political matters, especially in the context of elections.Footnote 164
While Kenya leads the region in the regulation of data protection and the abuse of social media platforms, it is apparent that electoral law, as currently formulated, does not regulate the increased use of social media in elections. Although election laws prohibit campaigning in the two days before a general election, there is no law banning candidates from purchasing advertising on social media platforms up until election day.Footnote 165 Also of concern is that online advertising is not adequately regulated, as the Electoral Code of Conduct does not cover online activities. Since the IEBC has proposed to review its Electoral Code of Conduct following several court challenges arising from the 2022 elections,Footnote 166 we recommend that it engage stakeholders in developing specific regulations for online campaigns, with attendant sanctions included in the Election Offences Act.Footnote 167 The Kofi Annan Foundation similarly recommends that all political actors sign the Digital Pledge to stop the spread of misinformation.Footnote 168
13.5.2 Institutional Measures to Combat Disinformation and Misinformation
Institutional measures are equally important in tackling disinformation and misinformation. Social media sites and tech companies have an obligation to moderate content to reduce misinformation and disinformation. However, platforms such as Facebook have been criticised because an estimated 87 per cent of the resources devoted to tackling misinformation in 2021 were allocated to English-language content, yet only 9 per cent of the platform’s users actually speak English.Footnote 169 Odanga Madung attributes this to an unwillingness by tech platforms to ‘devote time and resources to an electoral process outside their cultural context’.Footnote 170
Furthermore, Article 19 has expressed concern over the failure of social media companies to incorporate feedback from local communities into their content moderation strategies. As a result, these strategies often fail to consider the cultural, social, historical, economic and political contexts of the communities they affect.Footnote 171 Most language moderation tools primarily cater to English, rendering them ineffective where local slang or a mixture of languages, such as Kiswahili and English, is used. Further, these tools struggle to interpret statements involving derogatory metaphors or those heavily dependent on contextual understanding.Footnote 172 The absence of comprehensive content moderation facilitates the spread of misinformation and exacerbates its impact on electoral processes.
In 2022, the IEBC entered into memoranda of understanding with the Media Council of Kenya and tech companies, such as Facebook, on regulating online content. It may be necessary to create a coalition of actors to address concerns in this area. Having successfully convened political actors under the Political Parties Liaison Committee, the IEBC can use existing goodwill to convene social media platforms to address digital threats to electoral integrity. As proposed in the 2020 study by the Kofi Annan Foundation, such a coalition would allow for the creation of ‘cross-platform strategies for detecting and limiting the reach of weaponized disinformation and hate speech’.Footnote 173 Collaboration between the IEBC, online platforms such as Meta and Twitter, and local fact-checking organisations presents an effective strategy to combat disinformation and misinformation using both reactive and proactive measures. This can involve actions such as suspending accounts linked to disinformation campaigns. In addition, measures such as Twitter’s suspension of its trending feature during elections, as observed in Ethiopia during a period of rising disinformation and hate speech, can contribute significantly to countering the problem.Footnote 174
13.5.3 Non-Legal Initiatives to Combat Misinformation and Disinformation
Several initiatives have been undertaken to address the problem of fake news. Africa Check is a civil society initiative that works across Africa to verify news content and political statements. It is part of the International Fact-Checking Network (IFCN), which brings together fact-checkers and advocates for factual information working globally to combat fake news.Footnote 175 However, such sites get less visibility than those that spread false information, which limits their reach in addressing disinformation and misinformation.Footnote 176 Moreover, a ‘tidal wave of misinformation’ makes it difficult to keep up with fact-checking and to challenge fake news.Footnote 177 A collaborative approach aimed at enhancing the capacity of organisations already engaged in real-time fact-checking, particularly in the context of elections and crucial political matters, is therefore essential.Footnote 178
Further, the use of artificial intelligence (AI) tools to track election-related misinformation has proved ineffective owing to the language settings of these tools and their inability to identify context-specific misleading information. Existing tools have been criticised for not being tailored to local contexts. It is therefore recommended that AI tools be refined, in collaboration with local actors, to better handle local languages. However, since it is difficult to recall misinformation once it goes viral, increased human review of accounts and posts that threaten the integrity of elections is also recommended.Footnote 179
Education and media literacy initiatives are also crucial to empowering citizens to distinguish accurate information from misinformation, thereby addressing the human vulnerability that enables the propagation of fake news. One example of cooperative stakeholder efforts to foster critical thinking in the digital realm is Meta’s ‘trusted partner’ programme, which assists civil society in mitigating online and offline risks by offering expert insights and recommendations.Footnote 180 Increasing critical thinking and media literacy can make individuals more resilient to the influence of false information and less susceptible to manipulation. Importantly, it is reported that voters who spend more time on social media platforms tend to have a higher capacity for fact-checking and display a greater level of scepticism.Footnote 181 This heightened awareness makes them less susceptible to disinformation and misinformation campaigns.
In addition, the debunking of misinformation related to the 2022 elections limited its spread on digital platforms and vastly reduced its effect. Even as more work is done to retrain AI models for local contexts, frequent calling out of misinformation by political actors, traditional media and other influential personalities increases scepticism among the voting population, making voters harder to deceive.Footnote 182
13.5.4 The Need for a Balancing Exercise
A key aspect to consider is the balance between the effective use of technology in elections and the regulation of abusive behaviour. As this chapter has discussed, technology has the potential to enhance the integrity and credibility of elections, but it can also be exploited to spread disinformation and manipulate public opinion. Striking a balance between leveraging technology for democratic advancement and curbing its misuse is essential. Furthermore, there is a delicate balance to be struck between upholding privacy and security rights and protecting freedom of expression and political rights. Any regulatory measures should carefully navigate these contrasting rights to avoid unnecessary restrictions on freedom of speech while safeguarding against the misuse of data for disinformation and misinformation campaigns.
13.6 Conclusion
As noted, technology can serve as a tool to enhance democratic legitimacy and combat election rigging when properly implemented and managed. While social media is now an integral part of our elections and cannot be disregarded, the dark side of technology, namely its use to spread disinformation and misinformation, remains a threat to democratic trust and legitimacy, and its effects are yet to be fully appreciated. To restore public trust in the key institutions protecting democracy, and in the integrity and legitimacy of elections, a comprehensive and multifaceted approach to combating disinformation and misinformation is necessary. Legal, policy, institutional and educational measures should work in harmony to preserve the integrity of elections and protect democratic values. Striking the right balance will be key to fostering an informed and engaged citizenry, strengthening democratic processes and rights, and preserving the values of open debate, informed decision-making and public trust in the electoral process.