
The Disinformation Dilemma: Moderating Political Satire Under the DSA in Light of the European Convention on Human Rights

Published online by Cambridge University Press:  18 November 2025

Therese Enarsson*
Affiliation:
Department of Law, Umeå University, Umeå, Sweden

Abstract

This article examines the growing tension between protections for political satirical expression under Article 10 of the European Convention on Human Rights (ECHR) and emerging European regulatory efforts to combat disinformation through content moderation in the Digital Services Act (DSA). In this study, the case law of the European Court of Human Rights (ECtHR) has provided insights into the scope and contours of protected political satire under the ECHR, showing that the Court’s jurisprudence affords strong protection to political satire. Meanwhile, in today’s social media landscape, satire is merging with disinformation, not least around elections, potentially creating a “loophole” for spreading malicious disinformation disguised as satire. Both human and automated moderation systems can easily misclassify such content, and the risks of over-removal as well as under-removal are imminent. This article therefore argues that while the protection of satire and parody is essential in a democratic society, its use for malicious purposes speaks to the need for regulatory clarification on how to conduct efficient content moderation that avoids over-moderation and a chilling effect on political satire, while still identifying and mitigating risks under the DSA.

Information

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

I. Introduction

1. Background

In recent years, increasing demands have been placed on Very Large Online Platforms (VLOPs) to moderate content on their platforms.Footnote 1 The most significant example is the Digital Services Act (DSA), which requires VLOPs not only to conduct content moderation of illegal and harmful content, but also to protect fundamental rights in the process.Footnote 2 The DSA, introduced by the European Commission, builds on the foundation of the e-Commerce Directive (2000/31/EC) and aims to harmonise emerging national regulations governing the online space. Its overarching goal is to foster a safer and more trustworthy digital environment for users while addressing the broader societal risks posed by major online platforms. A central objective of the DSA is to ensure that all participants in the digital ecosystem contribute to a “safe, predictable and trustworthy online environment,” which is seen as essential for enabling citizens to fully exercise their fundamental rights.Footnote 3

A focus of the DSA – particularly for VLOPs – is the mitigation of harmful content like disinformation, recognising its potential to undermine democratic discourse and public trust.Footnote 4 As early as 2018, the European Commission released a communication on tackling disinformation within the European Union (EU). Here, it is explicitly stated that “[d]isinformation does not include reporting errors, satire and parody, or clearly identified partisan news and commentary” and that all actions taken to combat disinformation “[…] should strictly respect freedom of expression and include safeguards that prevent their misuse, for example, the censoring of critical, satirical, dissenting or shocking speech.”Footnote 5 In this communication, the Commission also recalled, in a footnote to that passage, the highly influential case of Handyside v. United Kingdom from the European Court of Human Rights (ECtHR), in which the Court held that even shocking or disturbing expressions are protected under the right to freedom of expression in Article 10 of the European Convention on Human Rights (ECHR).Footnote 6 Since then, the DSA has come into force, pushing for risk assessment and mitigation of harmful content like disinformation,Footnote 7 and the Strengthened Code of Practice on Disinformation (the Code)Footnote 8 is, as of July 2025, a formal code of conduct under the DSA.Footnote 9 This code of conduct further highlights the need to address disinformation on social media platforms, reiterating that disinformation does not include satire and parody.Footnote 10

Research into social media practices has shown the increasingly important and complex role played by humour and satire in the spread of harmful content online, such as disinformation, not least in political contexts. Satire, in forms such as memes, provocative videos, or even deepfakes portraying politicians, can be part of humorous political debate and exaggeration. It can also contribute to misleading or even manipulating the public as part of harmful disinformation.Footnote 11 This is a dilemma.

The demands in the DSA are vague when it comes to how and to what extent VLOPs should restrict harmful disinformation, but the requirements extend beyond clearly illegal content, instead linking disinformation to societal and democratic risks. VLOPs might have to assess and mitigate such risks in accordance with Articles 34 and 35 of the DSA.Footnote 12 The fact that disinformation can be harmful as well as humorous and satirical might put a strain on VLOPs to balance the right to freedom of expression for individuals against the risks stemming from the spread of political disinformation on platforms.

To help navigate this tension, an analysis of the ECHR and case law from the ECtHR is relevant for identifying what expressions are protected as satire and why. As the ECHR is the prime instrument for human rights in Europe, it provides important guidance on how political satire is to be understood in the context of freedom of expression under the Convention. According to Article 52(3) of the EU Charter of Fundamental Rights (the Charter), the ECHR serves as a baseline level of protection of human rights under Union law, and Union law may provide more extensive, but not more restrictive, protection. Article 6(3) of the Treaty on European Union (TEU) also specifies that the fundamental rights guaranteed by the ECHR constitute general principles of Union law. Therefore, when future decisions on compliance with the DSA are made by the European Court of Justice (ECJ) in Luxembourg, the ECJ will have to consider the rulings of the ECtHR, making them an interesting object of study in light of the DSA.

2. Aim, qualitative legal analysis and contribution

The aim of this article is to contribute a normative clarification to inform VLOPs’ content moderation of satirical disinformation that is political in nature – content that might fall both under the obligation to moderate risks connected to its dissemination under the DSA and under the protection of freedom of expression in the ECHR. This understanding is essential not only to ensure the protection of satirical expressions but also to enable the identification of disinformative satire that can be seen as a risk to society. In turn, this will inform the balancing act that moderation systems must perform in order to understand and comply with demands under the DSA and the ECHR.

The study begins by presenting overlapping markers between disinformation and satire, as well as the demands for moderation of harmful content under the DSA. This will be used as a backdrop to frame the study, illustrating the complexity of political satire in a highly digitally advanced society, where technological advancements have made it easier to create and disseminate satire, such as satirical memes and deepfakes, at scale. This framing is needed to understand the interplay between content moderation of risks under the DSA and the protection of political satire under the ECHR.

The legal analysis is a qualitative analysis of legal material, drawing on established methods of interpretation based on the hierarchy of legal sources,Footnote 13 with the case law of the ECtHR as the main legal source. The case law analysis focuses on how the ECtHR identifies and understands satire in relation to Article 10 and the protection of freedom of expression. As the ECJ is required to consider the principles established by the ECtHR when interpreting the DSA in future rulings, an analysis of the Strasbourg Court’s reasoning offers valuable insight into the contours of protected satire under the ECHR, and into what constitutes a reasonable or even warranted restriction of it in light of the right to freedom of expression.

The analysis will finally draw upon these findings, analyzing the ECtHR’s standing on satire in relation to the risk mitigation regime under the DSA. The findings will enhance the understanding of content moderation and disinformation in a digital environment characterised by memes, deepfakes and satire, adding to research in a still emerging field.

II. Framing satirical disinformation

1. Overlapping markers for disinformation and satire

Satire, deepfakes and disinformation can look similar, using exaggeration, impersonation and even distortion of visual and textual content online to make a point or convey a message. Examples of this overlap were apparent in the COVID-19 vaccination debate, where disinformation about the vaccine was spread by several means, including memes. Even if the purpose of those memes was to spread a humorous picture on social media, they could contribute to already existing disinformation campaigns.Footnote 14 Memes are an easy way to quickly create and share content, but because they are images rather than text, they are harder for automated moderation systems to detect. Since memes are short, and preferably funny and witty, there is no room for a nuanced message, allowing them to be (intentionally or unintentionally) misleading.Footnote 15

Memes and other satirical content have also been used to influence elections, using humour to evoke emotional responses from those who interact with them.Footnote 16 In political and electoral contexts, political satire has always been an important tool for delivering critique in a humorous way, but social media has amplified the reach and spread of this content.Footnote 17 This can also serve malicious purposes, such as using memes to disseminate disinformation.Footnote 18

To complicate matters further, deepfakes have now firmly entered our social media feeds. Deepfakes, now addressed in the Artificial Intelligence ActFootnote 19 (the AI Act), are understood as depictions or portrayals of a real person – so realistic that they could fool someone watching – created by using AI to manipulate content.Footnote 20 Deepfakes can be used to impersonate political figures, sometimes fooling parts of the public into believing politicians have said or done things they have not actually said or done.Footnote 21 Due to the many available pictures and videos of politicians, they are an easy target for deepfakes, which rely on available content in order to replicate someone’s voice, movements or mannerisms.Footnote 22

On the AI Act’s four-step scale, ranging from unacceptable and high risk down to limited or minimal risk, deepfakes are classified as limited risk. This classification affects what measures must be taken, and Article 50(4) of the AI Act now states that when a deployerFootnote 23 creates a deepfake, they must disclose that the content has been artificially generated or manipulated. However, such labels can easily be lost as images or videos are shared and spread online, causing confusion or misconceptions.Footnote 24 This has even been called a legal loophole or grey area, especially when parody and humour are used as a shield when spreading deepfakes with malicious intent.Footnote 25 Concerns have also been raised regarding this classification, with some advocating for a higher risk classification due to the potential harms of deepfakes.Footnote 26

Hence, the line between disinformation and satire in the form of text, memes, or deepfakes can be blurred, but obviously satire and disinformation are different things. Satire can be obvious parody, which is needed in any open and democratic society to add political or societal commentary. Disinformation, on the other hand, must be understood as aiming to cause harm.Footnote 27 The difference is the intent. Where satirical memes or deepfakes are aimed at entertaining and provoking the recipient, this is simply satire, whereas similar content aimed at deceiving or strongly swaying the receiver can be disinformation.Footnote 28 These similarities can render the distinction between them opaque, particularly in a digital context where content is rapidly disseminated and often consumed without much scrutiny by the average user of social media.Footnote 29

Similar difficulties face moderation systems, both algorithmic systems, often based on AI or machine learning, and human moderation. The sheer volume of content requires VLOPs to use automatic detection and sometimes removal of certain content, but humour and irony are hard to program into such systems, which can have trouble identifying, for instance, satire.Footnote 30 Here, human moderators have an advantage, since humans are generally better at understanding intent, nuance, irony, and contextual factors such as cultural jargon and expressions.Footnote 31 Algorithmic moderation, on the other hand, can handle volume and notice patterns, like disinformation campaigns, in a way impossible for humans. It can also help spare human moderators the emotional impact of content moderation.Footnote 32 According to the DSA Transparency Centre, 52% of all moderation decisions on VLOPs were fully automated during the first half of 2025.Footnote 33 Ensuring a safe space online requires that both human and algorithmic systems can distinguish protected satire and parody from disinformation presented under the guise of humour or satire. Mislabelling content can lead both to undue restrictions of expressions online and to the fuelling of harmful disinformation campaigns.

2. Moderating harmful content and actions under the DSA

Under the DSA, disinformation is not defined. However, given that the Code is now a formal Code of Conduct under the DSA – and therefore plays a significant role in determining DSA compliance – its definition provides useful insight into how disinformation is to be understood under the DSA.Footnote 34 As mentioned in the introductory background, the Code explicitly excludes satire from the definition of disinformation, as does the Commission’s Communication on tackling online disinformation. Apart from the clear exclusion of satire, the Code provides a broad definition of disinformation: the spreading of false or misleading information in order to cause harm. Misinformation is also included here, and it is differentiated from disinformation by intent. When people spread false information in good faith, it is considered misinformation; when there is a deceitful intent, it is classified as disinformation. Both mis- and disinformation fall within the scope of the Code. Disinformation is also understood as covering more severe acts with a specific goal, often some form of political gain, achieved by actively deceiving the public. This can take the form of disinformation campaigns, both foreign and domestic, as well as other efforts by foreign state actors to influence or disrupt the free political will of individuals.Footnote 35

Under the DSA, disinformation is framed as harmful but not necessarily illegal. Satire is not mentioned at all. The Commission’s objective is not to censor content or to require platforms to ban legal expressions through their terms and conditions; doing so would exceed the scope of the Commission’s legal authority under EU law.Footnote 36 Being watchful about imposing obligations that could target content like satire or parody is therefore understandable and reasonable. However, the DSA imposes a firm risk mitigation regime to manage risks on platforms, and disinformation can be such a risk.

In the DSA, disinformation is mentioned as a potential risk throughout – a societal risk that can cause societal harm.Footnote 37 Under Articles 34 and 35 of the DSA, VLOPs must conduct risk assessments and mitigation actions to counter risks stemming from their platform or the use of the platform. The risk assessment regime under the DSA is very broad, and systemic risks relating to – for example – negative effects on fundamental rights, and “actual or foreseeable” negative impacts on civic discourse, electoral processes, and public security, are all included in Article 34. Hence, VLOPs will have to conduct systemic risk assessments in a wide range of situations and contexts.

Throughout the DSA, a focus distinct from societal risks is also prominent, namely, the protection of individual fundamental rights. For instance, the importance of not infringing on freedom of expression is mentioned numerous times.Footnote 38 The extent of this obligation is not yet clear, nor is it certain whether this means that VLOPs must factor in the protection of their users’ fundamental rights alongside those of the surrounding society. This would require assessing, for example, whether their service amplifies content that could negatively affect fundamental rights in general.Footnote 39 These demands are also highlighted in regard to Article 35: any mitigating actions under that Article should be tailored to the risks identified under Article 34, and particular attention must be paid to whether and how those actions impact fundamental rights.

The mitigating measures listed in Article 35 contain a variety of actions, including adapting content moderation processes, and VLOPs must make sure they have a well-functioning moderation system with sufficient allocated resources.Footnote 40 As of now, the best clarification of the weight of these resource demands can be found in inquiries from the Commission. In 2023, the Commission launched formal proceedings against the social media platform “X” for not managing risks relating to illegal and harmful content on the platform, such as curbing disinformation. The outcome is still pending, leaving unanswered what the real consequences are for failing to adhere to these demands, but central to the inquiry is whether X’s risk mitigation is genuinely effective – for example, whether it allocates enough human and technical moderation resources and whether its moderation systems are robust enough to meet regulatory expectations in order to counter risks.Footnote 41

As mentioned, a great deal of the risk assessment and mitigation process remains indeterminate and is yet to be clarified, which results from much in the DSA being handled in dialogue with VLOPs and through voluntary actions, like codes of conduct, with formal enforcement by the Commission as a last resort.Footnote 42

Notwithstanding the challenges inherent in the DSA’s risk assessment regime, its broad, yet somewhat vague, focus on systemic risks is designed to ensure the regulation’s longevity by avoiding dependence on specific types of content or technology. The assessment under the DSA concerns not the content or individual expressions themselves, but rather the potential harm they may pose to democratic values and society. Be that as it may, it is worth noting that some aspects relating to specific types of disinformative or satirical content – such as the aforementioned AI-generated deepfakes – are normally regulated under the AI Act and not the DSA. The obligation to label AI-generated or manipulated content, like deepfakes, to avoid the risk of it being misinterpreted as authentic is, as mentioned, regulated in the AI Act. The DSA does not reference deepfakes at all. For the purpose of this article, however, the intertwining of the DSA and the AI Act becomes apparent. The specific risks of deepfakes relating to the content moderation process of VLOPs – making moderation decisions on manipulated content – are in fact still placed under the risk assessment regime of the DSA. This responsibility is further clarified by Commitment 14 of the Code, which states that Signatories are expected to address and review various harmful actions and behaviours, such as malicious deepfakes.

There are, however, important links between the transparency requirement for generative AI (including deepfakes) in the AI Act and the demands for risk assessment and mitigation for VLOPs under the DSA. The demands to label content such as deepfakes, and how they relate to the DSA, are highlighted in recital 136 of the AI Act, which states that:

The obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, in particular the risk of the actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation.

Taken together, this shows that the demands on VLOPs to assess and, if necessary, mitigate risks of disinformative political content, including deepfakes, are part of the obligations under Articles 34 and 35 of the DSA. This indicates that VLOPs and their moderation systems should – or indeed must – distinguish between, on the one hand, satire and parody, which are entirely excluded from the scope of disinformation, and, on the other, harmful disinformation that may also take satirical or humorous forms but has the potential to cause societal harm.

When discussing the DSA, previous research has also highlighted the need for regulation of disinformation that is fine-tuned enough to target the societal risks of disinformation while leaving room for satire protected under freedom of expression.Footnote 43 The fact that this potentially difficult task is addressed neither in the DSA nor in the Code could mean that this is not perceived as a situation that would ever occur, or that would ever be difficult for VLOPs and their content moderation systems to handle, and is therefore not worth discussing. It could also be an oversight in the Code and the Communication from the European Commission. In light of current developments, where disinformative memes, deepfakes and other forms of humour or satire can easily be mistaken for facts, it might not be possible simply to exclude all forms of satire if VLOPs are also to diligently assess and mitigate risks. Or, at the very least, the moderation systems of VLOPs must be fine-tuned enough to classify speech as political satire – no small task in itself, considering that content moderation in general is particularly challenging around elections, when platforms are flooded with content.Footnote 44

In understanding this, the case law of the ECtHR on the protection of satire under Article 10 of the ECHR might provide normative guidance on how to understand and identify satire, and how to differentiate it from disinformation.

III. Understanding the protection of political satire under the ECHR

1. Protecting (dis)informative expressions in political contexts?

Understanding how satire relates to disinformation first raises the question of whether the ECtHR has addressed that issue directly. It has not. However, the Court has over the years decided cases relating, more or less directly, to false, exaggerated or simply disinformative expressions in political contexts. From the Court’s case law, it is apparent that the context – the political sphere – is of great importance for understanding the protection such expressions enjoy under Article 10. This is a context in which the Court has stated that even restrictions on untrue or unsubstantiated claims can violate Article 10, as exemplified by Kwiecień v Poland and Kita v Poland.Footnote 45 In both cases, the Court placed emphasis on the fact that the intent of the applicants was not malicious. Even if the statements made in these cases were untrue, the aim was to shed light on specific aspects of either misconduct or suitability to hold public office. The fact that domestic courts deemed the statements untrue without looking into the motive of the applicants made both cases a violation of Article 10.Footnote 46 As will be shown, these aspects will prove relevant in the Court’s case law on satire as well.

Another similar example is Brzeziński v. Poland. Under Polish election law, a court may decide – within 24 hours – whether a message constitutes disinformation and, if deemed false, prohibit its further dissemination. Brzeziński, a political candidate, released a booklet as part of his political campaign. The domestic court immediately deemed some statements in the booklet false and demanded that the applicant publish corrections in two local newspapers, and he was also fined. This was the first (and only) time that the ECtHR mentioned the not uncontroversial term “fake news” in its case law,Footnote 47 and the Court highlighted the urgent nature of mitigating the effects of false statements damaging the reputation of election candidates.Footnote 48 Nevertheless, the Court at the same time stated that exaggerated and provocative statements were not out of the ordinary for local political debates, and that the interference with freedom of expression under Article 10 was disproportionate.Footnote 49 Here, one can also be reminded of the Court’s more general understanding that expressions made by politicians (and journalists) are especially protected in relation to their contributions to the public debate.Footnote 50

To contrast, Sanchez v. France Footnote 51 is a good example of liability for expressions made (or allowed) in a political or electoral context. This case is not best known for its connection to disinformation, but for marking the first time that placing criminal liability on a Facebook user for third-party comments was found not to constitute a violation of Article 10.Footnote 52 In Sanchez, a politician had not deleted Islamophobic comments from his Facebook wall and was fined for this. The Court stated that:

[…] to exempt producers from all liability might facilitate or encourage abuse and misuse, including hate speech and calls to violence, but also manipulation, lies and disinformation. In the Court’s view […] there should be a sharing of liability between all the actors involved […]. Footnote 53

Here the Court highlights the importance of not exempting producers from accountability, in order to avoid misuse – not only hate speech or incitement to violence, but also other harmful actions, like disinformation. Generally, the Court is adamant about allowing political expressions and debate on matters of public interest so as not to cause a chilling effect.Footnote 54 But again, Sanchez v. France is interesting since it shows a delineation of the protection of the “political sphere,” as the political component did not prevent imposing liability on the politician for allowing hateful speech on his Facebook page.Footnote 55 Also worth mentioning is that other types of false and hateful expressions, like Holocaust denial, can be completely excluded from the protection of Article 10 through Article 17 and its prohibition of actions aimed at the destruction of any of the rights and freedoms in the ECHR.Footnote 56 Disinformation that is hateful or discriminatory in nature can therefore more easily be restricted.Footnote 57

But what about political or electoral content that is not per se hateful or inciting violence, but still false and potentially misleading? This is something that has been touched upon by the Court, especially in a pre-election context. Here, the case of Staniszewski v. Poland Footnote 58 brings us back to the relevant aspects of intent and good faith. In this case, a journalist claimed that the local mayor had chosen a certain village for a festival only in order to gain support in an upcoming election. The applicant had not contested that the statements were untrue and had provided no evidence of a factual basis other than that they came from the public domain or public sources. Here the Court expressed the importance of free elections and of sharing different information and opinions, but also went on to state that:

At the same time the Court recognises the importance of protecting the integrity of the electoral process from false information that affect voting results, and the need to put in place the procedures to effectively protect the reputation of candidates Footnote 59

It is particularly interesting that the Court found no violation in this case, given that the applicant was a journalist and therefore enjoyed greater protection under Article 10. The Court highlighted that the politician must endure scrutiny as part of his role, as well as the protection afforded to journalistic coverage of politics and politicians. But the journalist in question had not acted in good faith or exercised due diligence to make sure he presented correct information.Footnote 60 Therefore, no violation of Article 10 was found.

This brief overview of the ECtHR’s case law shows that disinformative content can be addressed and restricted without infringing the protection provided under Article 10. In other words, the case law does not, in itself, conflict with the DSA’s aim of countering disinformation. At the same time, the Court recognises that, in electoral and political contexts, even false statements can have value, which strengthens the protection afforded to such expressions. The intent and nature of the statements must therefore be considered. This raises the question: how should satirical political content be treated in this context? Understanding that requires clarifying the ECtHR’s reasoning regarding the protection of satire.

2. Clear and intentional satire?

In the landmark case Vereinigung Bildender Künstler v. Austria in 2007, satire was for the first time specifically defined in the Court’s case law. The case concerned an art exhibition portraying public figures in different sexual contexts, and with four votes to three, the Court found that the injunction and the protection of personal rights constituted a disproportionate interference with the right to freedom of expression in a democratic society, given that the paintings were satirical in nature and contributed to public discourse. The Court specified here that satire is part of artistic expression under Article 10:

[…] satire is a form of artistic expression and social commentary and, by its inherent features of exaggeration and distortion of reality, naturally aims to provoke and agitate. Accordingly, any interference with an artist’s right to such expression must be examined with particular care. Footnote 61

In an earlier case, Alinak v. Turkey in 2005, the Court clarified that artistic expression is included in the protection of Article 10, stating that art is part of the right to exchange cultural, social and political ideas and information of all kinds, and that interferences with it by signatory States must be proportionate.Footnote 62 Artistic expression in general does not, however, warrant the same scrutiny as the political expression discussed above,Footnote 63 but satirical artistic expression, by contrast, requires a more careful balancing, in line with Vereinigung Bildender Künstler v. Austria.

The stronger protection of satire can be exemplified by Yevstifeyev and Others v. Russia. The case concerned a video portraying gay adoptive parents, made in response to an anti-gay video posted by the pro-government media website FAN. The video posted by FAN was part of a political campaign urging voters to vote for amendments to the constitution. The Court stated that the video before it was clearly a parody of and response to this anti-gay video by FAN, and that its purpose was not to propagate against homosexuality but to mock the original video. The Court also emphasised that “[…] the contested video could be seen as a contribution to a political debate about the proposed constitutional amendments, expressed in a satirical form.”Footnote 64 Here, intent is undoubtedly a large part of the Court’s reasoning. The intention was to mock and bring attention to the matter at large, not to spread hate. This mirrors the reasoning in Kwiecień v Poland and Kita v Poland, discussed above, where the intention behind the expressions – to highlight issues in relation to elections or politicians – was more important for the protection under Article 10 than their factual basis. The Court also took into account the context of the expression, and the content and tone of the message, described as humorous. Taken together, the Court could not perceive the video as actually hateful. The intent behind the video and contextual factors therefore suggested that it “clearly” was parody.Footnote 65

Similar reasoning can be found in Vereinigung Bildender Künstler v. Austria. The depiction of thirty-four public figures in sexual positions and situations was described by the Court as “unrealistic and exaggerated,” and the painting of the applicant, one of the thirty-four depicted, “obviously did not aim to reflect or even to suggest reality.” This also led to the conclusion that the portrayal could not be seen as commenting on the applicant’s personal life, but rather on his role as a politician.Footnote 66

This shows that the Court emphasises not only the intent of the message but also the tone, context and content itself, and whether it is obviously exaggeration and parody, not to be taken literally. Two sides of the message – intention and perception – are therefore important in understanding the Court’s reasoning.

To develop this further, the case of Nikowitz and Verlagsgruppe News GmbH v. Austria should be mentioned. In this case, the publication of a satirical article about an injured athlete had been ruled defamatory by the Austrian courts, with a penalty imposed. The Court viewed this as an infringement of the freedom of expression under Article 10. The interesting aspect of this case, in this context, is that the Court discusses the understanding of the readers. The Court reacted to the domestic court’s suggestion that an unfocused reader might not understand that the message was satirical, and expressed that:

The article, as was already evident from its headings and the caption next to [the athlete’s] photograph, was written in an ironic and satirical style and meant as a humorous commentary. Nevertheless, it sought to make a critical contribution to an issue of general interest, namely society’s attitude towards a sports star. The Court is not convinced by the reasoning of the domestic courts and the Government that the average reader would be unable to grasp the text’s satirical character and, in particular, the humorous element of the impugned passage about what [the athlete] could have said but did not actually say.

This indicates that the Court trusts even a slightly “unfocused” reader’s ability to distinguish satire from fact.

It has already been demonstrated in the discussion above that political satire enjoys strong protection. Going back to Yevstifeyev and Others v. Russia, the Court also highlighted that satirical forms of expression relating to topical issues are a crucial part of a democratic society and of an open public debate.Footnote 67 This is also reiterated in Alves da Silva v. Portugal, where a politician was portrayed as corrupt in a puppet show during a festival. The Court here again stated that it was clear from the context that this was satire, and that even if the politician himself had taken it literally, he must tolerate criticism in his position as a politician.Footnote 68 This is interesting in relation to the case law presented above, showing that politicians’ expressions have stronger protection under Article 10, and that this stronger protection is, in a way, a double-edged sword. Their political expressions receive strong protection, but they are also expected to tolerate more criticism. This plays into the field of satire, where politicians are expected to endure criticism, especially in the form of mockery, parody and satire.Footnote 69 In Yevstifeyev and Others v. Russia the ECtHR further highlighted that “[…] in the context of an election campaign, a certain vivacity of comment may be tolerated more than in other circumstances,” with reference to the Sanchez case, among others.Footnote 70 This shows the high level of protection for political satire and its role as part of the public political debate.

The Sanchez case is also relevant for understanding satire in other respects, even though it does not itself deal with satire. In setting the limits of the protection of political speech or the political sphere, the Court notes that hateful comments can become more impactful and harmful in a political context.Footnote 71 The Court has not ruled in a case that combines overtly hateful comments with satire, but in light of other case law, it is reasonable to assume that hateful or violent content would, at the very least, diminish the level of protection afforded to expressions, even in a political context.

This conclusion is further strengthened by case law on humorous content under the ECHR, where, in the case of Z.B. v. France, the Court found no violation in convicting the applicant for dressing his three-year-old nephew in a t-shirt bearing terrorism slogans to wear to nursery school. The applicant claimed it was done in a humorous manner, but the Court argued that preventing the “glorification of mass violence” was a legitimate aim under Article 10.Footnote 72

IV. Discussion and conclusions

1. Moderating risk, context and intent

The aim of this article has been to shed light on the complexity of political satire in relation to disinformation, and on how to perform sufficient content moderation of content that might fall both under the obligation to moderate risks under the DSA and under the protection of freedom of expression in the ECHR.

The case law of the ECtHR has provided insights into the scope and contours of protected political satire under the ECHR, the most significant being that the Court’s jurisprudence affords stronger protection to political satire (and political expression in general) than to purely artistic expression. The importance of a free and open democratic debate on matters important to the general public allows for provocative, critical and humorous expressions in various forms. Under that premise, it is reasonable to exclude satire from the definition of disinformation under the Code, the DSA and other emerging tools for content moderation. On the other hand, under the moderation obligations in the DSA, the distinction between political satire and disinformation might be harder to make, as the risk of disinformation is most prominent in political contexts – a context especially protected under the ECHR. This illustrates that the DSA and the ECHR function under fundamentally different parameters. Where the ECtHR can and must make nuanced and balanced decisions in individual cases, enabling discussions of the intent and purpose of expressions in relation to a certain person, group or action, content moderation systems must instead relate their moderation practices to risks stemming from their platform. This could also be one explanation for the scope and definition of disinformation in the Code, where the spreading of false or misleading information is included regardless of intent, since the effects could still be harmful.Footnote 73 Like the DSA, this directs the signatories of the Code to assess potentially harmful misleading content rather than the intent behind it. In light of this study, however, a more nuanced moderation decision, where intent is a factor, could be relevant to upholding the freedom of expression of users.

That said, before the matter of intent becomes relevant, a first step is to identify situations where satire could or should be addressed by moderation systems. Here, the study of the ECtHR’s case law provides some, albeit not detailed, guidance for moderation systems on which aspects may render interferences with satire acceptable, and sometimes necessary. In Sanchez, for example, the context and the hateful nature of the statements made restricting them proportionate, and Z.B. v. France showed that glorifying terrorism or violence, even with a humorous intention, is a type of expression that can be limited. Reasonably, this could also translate to satirical expressions in a political context, providing some guidance for VLOPs when instructing their moderation systems. In such cases, the potential harm deriving from satirical hateful or violent expressions could indicate a need for some kind of moderation action to mitigate the risk of the message.

Going back to the matter of intent, in light of the Court’s case law, this could also be factored into moderation decisions. If a clearly malicious purpose is found behind a disinformative, but also satirical, message, the Court’s case law indicates that such content could more easily be targeted. Its value in a democratic society will decrease if the aim is to spread falsehoods and intentionally mislead. However, clarifying the underlying intention of the creator or “re-poster” of an individual meme is highly challenging for content moderation systems. Even for the ECtHR, intent is a difficult aspect to factor into decisions, despite the fact that the Court only decides specific cases, bounded in time, space and context, whereas content moderation systems face uncertainty as to who originally posted a certain message and what their purpose was. The context in which the message is posted can also change over time, and the message might be modified, built on, challenged or praised by numerous users along its journey through the internet. Therefore, the spread and reach of the content – and the risk of it being part of a disinformation campaign – will perhaps be more significant than intent when performing risk assessments.

The fact that platforms must follow risk-based regulations is a regulatory turn in targeting the digital environment. This is a way to balance the responsibility, placing more far-reaching responsibilities on big platforms with more resources to mitigate larger risks associated with the spread and reach of content.Footnote 74 It is therefore not surprising that it is not entirely compatible with the individual rights-based approach that characterises the case law of the ECtHR. Given the difficulties even the Court faces in determining intent, some scholars have suggested that the risk of harm should take precedence over intent as a guiding criterion even for the ECtHR, as harm can be considered a less subjective standard in cases involving satire.Footnote 75

In addition to the finding that malicious intent and hateful content are less problematic to target for moderation systems seeking to mitigate risks, another finding of this study is the emphasis placed on the importance of content being recognisably satirical. The Court has repeatedly stressed that the satirical nature of the content in the cases before it is “clearly” apparent. As mentioned, in Nikowitz and Verlagsgruppe News GmbH v. Austria, for instance, the Court described readers as being able to recognise the satire in the article, contradicting the domestic court’s view that an unfocused reader might fail to perceive it. It might be an unfair comparison to relate this back to the online environment and the average social media user – where satire and parody are not infrequently perceived as real –Footnote 76 but it is still interesting to note that the Court sets a rather high bar, expecting an average person to understand the underlying irony or satire. On social media, content such as memes can surface in different contexts and acquire different meanings, a far less fixed environment than that of news media. At the same time, content on these platforms can spread rapidly and be consumed only briefly by scrolling users, raising the risk of misleading some of them. If so, the risk of causing harm – for instance misleading the public in an election context – could also increase. Whether, and how, VLOPs are required to assess user understanding and perception of disinformative satirical content – for example, in relation to the potential risk of it being widely disseminated – remains an open question.

That being said, some, perhaps even most, political satire can probably still be perceived as obvious parody by a majority of social media users. Advancements in deepfake technology, however, complicate this picture: deepfakes have been described as posing “[…] an existential threat to the integrity of information and trust in public discourse,” and it has been noted that “[…] these technologies were not only creating realistic ‘deepfakes’ but were also being actively used to disseminate misinformation, extort victims, and access classified information, making it increasingly challenging for both humans and AI to identify and verify information.”Footnote 77 Even though it is likely that moderation systems will also become more fine-tuned and advanced, this highlights the importance of robust moderation tools to keep up with technological advances.

Finally, it is worth noting that risks under the DSA may arise in different ways. On the one hand, satire may be misclassified as disinformation and subjected to moderation measures, thereby infringing freedom of expression. On the other hand, platforms – fearful of over-moderation – may allow harmful disinformation disguised as satire or parody to remain online, potentially fuelling disinformation campaigns. This underscores the need for VLOPs to allocate sufficient moderation resources so that both automated and human moderation can detect, identify and, most importantly, make nuanced moderation decisions distinguishing satire from disinformation. The role of human moderators should not be underestimated, as they are generally better equipped than automated systems to interpret intent, contextual factors and how content will be perceived by users.Footnote 78

2. Conclusion

As previously shown, in today’s social media landscape satire is merging with disinformation, not least around elections, and can be used to spread malicious disinformation disguised as parody, humour or satire. In a society marked by rapidly developing technologies and a highly charged political climate, this creates the risk of a “satire loophole” where manipulative content can hide behind jokes, while legitimate satire risks being moderated and censored under the moderation regimes of VLOPs. Categorically excluding satire and parody from the understanding of disinformation under the emerging content moderation framework in the DSA and the Code could therefore be too simplistic an approach. The integrity of democratic discourse depends on content moderation practices that are both sophisticated and context-sensitive. This includes recognising satire’s democratic value while also acknowledging its potential misuse. Therefore, this Article argues that while the protection of satire and parody is essential in a democratic society, its use for malicious purposes calls for regulatory clarifications on how to conduct efficient content moderation that avoids over-moderation and a chilling effect on provocative and satirical expressions, while still identifying and mitigating risks under the risk assessment regime.

Financial support

The author declares this research was funded by the Swedish Research Council, grant numbers 2020-02278 and 2022-05414.

Competing interests

The author has no conflicts of interest to declare.

References

1 Very Large Online Platforms are here defined in accordance with Art. 33 (1) of the DSA, as online platforms with more than 45 million users.

2 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) 2022 (OJ L) recital 9, Art. 34, Art. 35; G Frosio and C Geiger, “Taking Fundamental Rights Seriously in the Digital Services Act’s Platform Liability Regime” (2023) 29 European Law Journal 31.

3 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) recitals, 2, 3, 9.

4 Ibid, see recitals 9, 69, 83 and 84. MR Leiser, “Reimagining Digital Governance: The EU’s Digital Service Act and the Fight Against Disinformation” (24 April 2023) <https://papers.ssrn.com/abstract=4427493> accessed 31 December 2023.

5 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions; Tackling online disinformation: a European Approach 2018.

6 Handyside v the United Kingdom [1976] European Court of Human Rights 57499/17.

7 S Galantino, “How Will the EU Digital Services Act Affect the Regulation of Disinformation?” (2023) 20 SCRIPTed 89; Leiser (n 4).

8 European Commission, “The Strengthened Code of Practice on Disinformation 2022” (European Commission 2022) <https://digital-strategy.ec.europa.eu/en/library/2022-strengthened-code-practice-disinformation>.

9 R Jahangir, “The EU’s Code of Practice on Disinformation Is Now Part of the Digital Services Act. What Does It Mean? | TechPolicy. Press” (Tech Policy Press, 25 February 2025) <https://techpolicy.press/the-eus-code-of-practice-on-disinformation-is-now-part-of-the-digital-services-act-what-does-it-mean> accessed 7 March 2025.

10 European Commission (n 8) p 1 (footnote 5).

11 NT Lee and IP Hernández, “AI Memes: Election Disinformation Manifested through Satire” (Brookings, 3 October 2024) <https://www.brookings.edu/articles/ai-memes-election-disinformation-manifested-through-satire/> accessed 12 June 2025; LG Reed, “‘Taking Jokes Seriously’: Establishing a Normative Place for Satire within the Freedom of Expression Analysis of the European Court of Human Rights” (2022) 11 Journal of Law and Jurisprudence <https://student-journals.ucl.ac.uk/laj/article/id/1355/> accessed 13 June 2025; A Ray, “Disinformation, Deepfakes and Democracies: The Need for Legislative Reform” (2021) 44 University of New South Wales Law Journal. <https://www.unswlawjournal.unsw.edu.au/article/disinformation-deepfakes-and-democracies-the-need-for-legislative-reform/> accessed 24 June 2025.

12 T McGonagle and K Pentney, “From Risk to Reward? The DSA’s Risk- Based Approach to Disinformation” Unravelling the Digital Services Act package (European Audiovisual Observatory 2021); See P Leerssen, “An End to Shadow Banning? Transparency Rights in the Digital Services Act between Content Moderation and Curation” (2023) 48 Computer Law & Security Review 105790; RÓ Fathaigh, N Helberger and N Appelman, “The Perils of Legally Defining Disinformation” (2021) 10 Internet Policy Review <https://policyreview.info/articles/analysis/perils-legally-defining-disinformation> accessed 30 November 2023.

13 JM Smits, The Mind and Method of the Legal Academic (Edward Elgar Publishing 2012) <https://www.e-elgar.com/shop/gbp/the-mind-and-method-of-the-legal-academic-9780857936547.html> accessed 4 March 2024; I McLeod, Legal Method (Bloomsbury Publishing 2020).

14 CH Basch and others, “A Global Pandemic in the Time of Viral Memes: COVID-19 Vaccine Misinformation and Disinformation on TikTok” (2021) 17 Human Vaccines & Immunotherapeutics 2373.

15 P Suciu, “Social Media Memes Could Sway Voters in the Presidential Election” Forbes (26 September 2024) <https://www.forbes.com/sites/petersuciu/2024/09/26/social-media-memes-could-sway-voters-in-the-presidential-election/> accessed 12 June 2025.

16 NT Lee and IP Hernández (n 11).

17 A Raza, “The Influence of Political Satire on Voter Behavior: A Study of Digital Media’s Impact” (2024) 5 Global Media and Social Sciences Research Journal 49 p. 51.

18 A Williams and M Dupuis, “I Don’t Always Spread Disinformation on the Web, but When I Do I Like to Use Memes: An Examination of Memes in the Spread of Disinformation”.

19 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) 2024.

20 Art. 3 (60) and Recital 134, AI Act.

21 R Chesney and DK Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” (2019) California Law Review 1753 <https://www.ssrn.com/abstract=3213954> accessed 24 June 2025; M Hameleers, TGLA van der Meer and T Dobber, “They Would Never Say Anything Like This! Reasons To Doubt Political Deepfakes” (2024) 39 European Journal of Communication 56.

22 Ray (n 11) pp. 985–86.

23 “Deployer” as defined in Art. 3 (4) and recital 13 of the AI Act, is natural or legal persons using an AI system (except for strictly personal reasons).

24 As an example from an American context, the spreading of deepfakes of Kamala Harris by Elon Musk ahead of the American election in 2024 can be mentioned: Scott Rosenberg, “Deepfakes’ Parody Loophole” (Axios, 30 July 2024) <https://www.axios.com/2024/07/30/ai-deepfake-parody-musk-first-amendment> accessed 26 June 2025.

25 FR Moreno, “Generative AI and Deepfakes: A Human Rights Approach to Tackling Harmful Content” (2024) 38 International Review of Law, Computers & Technology 297; M Łabuz, “Regulating Deep Fakes in the Artificial Intelligence Act” (2023) 2 Applied Cybersecurity & Internet Governance 1.

26 C Vanberghen, “The AI Act vs. Deepfakes: A Step Forward, but Is It Enough?” (Euractiv, 26 February 2024) <https://www.euractiv.com/section/artificial-intelligence/opinion/the-ai-act-vs-deepfakes-a-step-forward-but-is-it-enough/> accessed 6 March 2025.

27 European Commission (n 8); Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions; Tackling online disinformation: a European Approach (n 5); A French, VC Storey and L Wallace, “A Typology of Disinformation Intentionality and Impact” (2023) 2023 Information Systems Journal <https://onlinelibrary.wiley.com/doi/abs/10.1111/isj.12495> accessed 22 February 2024.

28 D Das and AJ Clark, “Satire vs Fake News: You Can Tell by the Way They Say It” (2019) <https://ieeexplore.ieee.org/document/8940415> accessed 11 June 2025.

29 A Chopra, M Kulundu and S Salem, “It’s No Joke: Across Globe, Satire Morphs into Misinformation” (ABS/CBN, 15 December 2022) <https://www.abs-cbn.com/spotlight/12/15/22/its-no-joke-satire-morphs-into-misinformation> accessed 12 June 2025.

30 EV Penagos, “ChatGPT, Can You Solve the Content Moderation Dilemma?” (2024) 32 International Journal of Law and Information Technology eaae028 p. 26.

31 J Cobbe, “Algorithmic Censorship by Social Platforms: Power and Resistance” (2021) 34 Philosophy & Technology 739; TD Oliva, DM Antonialli and A Gomes, “Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online” (2021) 25 Sexuality & Culture 700; European Union Agency for Fundamental Rights (FRA), ‘Online Content Moderation Current Challenges in Detecting Hate Speech’ (2023).

32 ST Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press 2019).

33 That figure is based on data provided by the Transparency Centre in June 2025: ‘Home – DSA Transparency Database’ <https://transparency.dsa.ec.europa.eu/> accessed 11 June 2025.

34 “Commission Endorses the Integration of the Voluntary Code of Practice on Disinformation into the Digital Services Act | Shaping Europe’s Digital Future” <https://digital-strategy.ec.europa.eu/en/news/commission-endorses-integration-voluntary-code-practice-disinformation-digital-services-act> accessed 19 June 2025.

35 European Commission (n 8), 1 (a) with footnotes.

36 M Husovec, “The Digital Services Act’s Red Line: What the Commission Can and Cannot Do about Disinformation” (2024) 16 Journal of Media Law 47 p. 55.

37 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) see recitals 2, 9, 69, 83, 88, 95, 104.

38 Ibid, see for instance recitals 3, 22, and 51–54.

39 N Eder, “Making Systemic Risk Assessments Work: How the DSA Creates a Virtuous Loop to Address the Societal Harms of Content Moderation” (2024) 25 German Law Journal 1197, p. 1206.

40 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), Art. 34.2 (b), 35.1 (c), recitals 87 and 96. See also; “Commission Requests Information from X on Decreasing Content Moderation Resources under the Digital Services Act | Shaping Europe’s Digital Future” <https://digital-strategy.ec.europa.eu/en/news/commission-requests-information-x-decreasing-content-moderation-resources-under-digital-services> accessed 27 June 2025.

41 “Commission Opens Formal Proceedings against X under the DSA” (European Commission – European Commission) <https://ec.europa.eu/commission/presscorner/detail/en/IP_23_6709> accessed 12 August 2025.

42 R Griffin, “Governing Platforms through Corporate Risk Management: The Politics of Systemic Risk in the Digital Services Act” (2025) European Law Open p. 9.

43 A Strowel and J De Meyere, “The Digital Services Act: Transparency as an Efficient Tool to Curb the Spread of Disinformation on Online Platforms?” (2023) 14 JIPITEC – Journal of Intellectual Property, Information Technology and E-Commerce Law 66 p. 70.

44 L Nikiforov, “Transparency in Targeting of Political Advertising: Challenges Remain” (Social Science Research Network, 1 November 2024) <https://papers.ssrn.com/abstract=5054430> accessed 27 February 2025.

45 See Kwiecień v Poland [2007] European Court of Human Rights 51744/99 and; Kita v Poland [2008] European Court of Human Rights 57659/00.

46 This is discussed also in E Shattock, “Fake News in Strasbourg: Electoral Disinformation and Freedom of Expression in the European Court of Human Rights (ECtHR)” (2022) 13 European Journal of Law and Technology 1.

47 RÓ Fathaigh, “Brzeziński v. Poland: Fine over ‘False’ Information during Election Campaign Violated Article 10” (Strasbourg Observers, 8 August 2019) <https://strasbourgobservers.com/2019/08/08/brzezinski-v-poland-fine-over-false-information-during-election-campaign-violated-article-10/> accessed 25 June 2025; Shattock (n 46).

48 Brzeziński c Pologne [2019] European Court of Human Rights 47542/07 para. 35.

49 Ibid, para. 59, 63.

50 See, for instance, Lombardo and Others v Malta [2007] European Court of Human Rights 7333/06.

51 Sanchez v France [2023] European Court of Human Rights Application no. 45581/15.

52 A Carlsson, Constitutional Protection of Freedom of Expression in the Age of Social Media: A Comparative Study (Department of Law, Uppsala University 2024) p 217.

53 Sanchez v France (n 51) para. 185.

54 Lindon, Otchakovsky-Laurens and July v France [2007] European Court of Human Rights 21279/02, 36448/02; D Harris and others, Harris, O’Boyle, and Warbrick: Law of the European Convention on Human Rights, vol 2023 (5th edn, Oxford University Press) <https://www.adlibris.com/se/bok/harris-oboyle-and-warbrick-law-of-the-european-convention-on-human-rights-9780198862000> accessed 19 June 2025 p 617.

55 J Jahn, “Strong on Hate Speech, Too Strict on Political Debate” (2023) Verfassungsblog <https://verfassungsblog.de/strong-on-hate-speech-too-strict-on-political-debate/> accessed 2 July 2024.

56 See, for example, Garaudy v France (dec) [2003] European Court of Human Rights Application no. 65831/01; Witzsch v Germany [2005] European Court of Human Rights Application no. 7485/03; M’bala M’bala v France (dec) [2015] European Court of Human Rights Application no. 25239/13; Williamson v Germany (dec) [2019] European Court of Human Rights Application no. 64496/17.

57 This is also a conclusion in Shattock (n 46).

58 Staniszewski v Poland [2021] European Court of Human Rights 20422/15.

59 Ibid, para. 47.

60 Ibid, paras. 48, 51.

61 Vereinigung Bildender Künstler v Austria [2007] ECtHR 68354/01 para. 33.

62 Alinak v Turkey [2005] ECtHR 40287/98 para. 42.

63 D Harris and others (n 54) p 620.

64 Yevstifeyev and Others v Russia [2024] ECtHR 226/18, 236/18, 2027/18 para. 56.

65 Ibid, para. 55.

66 Vereinigung Bildender Künstler v. Austria (n 61) paras. 33–4.

67 Ibid, para. 57; See also Eon v France [2013] European Court of Human Rights 26118/10 para. 61.

68 Alves Da Silva c Portugal [2009] European Court of Human Rights 41665/07 para. 28.

69 Vereinigung Bildender Künstler v. Austria (n 61) para. 34; see also Alves Da Silva c. Portugal (n 68) para. 28.

70 Ibid, para. 57; See also Sanchez v France (n 51) para. 152; Desjardin v France [2007] European Court of Human Rights Application no. 22567/03 para. 48; Brasilier v France [2006] European Court of Human Rights Application no. 71343/01 para. 42.

71 Ibid, para. 153.

72 ZB v France [2021] European Court of Human Rights 46883/15.

73 European Commission (n 8), 1 (a) with footnotes.

74 G De Gregorio and P Dunn, “The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age” (2022) 59 Common Market Law Review <https://kluwerlawonline.com/api/Product/CitationPDFURL?file=Journals\COLA\COLA2022032.pdf> accessed 25 April 2025; CS Emes, “Exploring New Frontiers in Digital Governance: Addressing the Ambiguities of Risk-Based Regulation Approach for Platforms” (Social Science Research Network, 30 January 2025) <https://papers.ssrn.com/abstract=5242418> accessed 22 May 2025.

75 H Hussain and S Sanghi, “Irreverence Intended? Destabilizing ‘Intent’ as Determinative in Discourse around Satire at the ECtHR” (2022) 2022 Pécs Journal of International and European Law <https://ora.ox.ac.uk/objects/uuid:9a073321-815f-4d2a-a23a-446ba63df2e5> accessed 18 June 2025.

76 A Chopra, M Kulundu and S Salem (n 29).

77 Romero Moreno (n 25) p 298.

78 See n 31.