
Finlandised electobots and the distortion of collective political memory

Published online by Cambridge University Press:  04 December 2025

Nuppu-Maija Pelevina*
Affiliation:
Communication, School of Marketing and Communication, University of Vaasa, Vaasa, Finland
Tanja Sihvonen
Affiliation:
Communication, School of Marketing and Communication, University of Vaasa, Vaasa, Finland
Rebekah Rousi
Affiliation:
Communication, School of Marketing and Communication, University of Vaasa, Vaasa, Finland
Tomi Laapotti
Affiliation:
Communication, School of Marketing and Communication, University of Vaasa, Vaasa, Finland
Heidi Mikkola
Affiliation:
Communication, School of Marketing and Communication, University of Vaasa, Vaasa, Finland
*
Corresponding author: Nuppu-Maija Pelevina; Email: nuppu-maija.pelevina@uwasa.fi

Abstract

Given the potential of generative artificial intelligence (GenAI) to create human clones, it is not surprising that chatbots have been implemented in politics. In a turbulent political context, these AI-driven bots are likely to be used to spread biased information, amplify polarisation, and distort our memories. Large language models (LLMs) lack ‘political memory’ and cannot accurately process political discourses that draw from collective political memory. We refer to research concerning collective political memory and AI to present our observations of a chatbot experiment undertaken during the Presidential Elections in Finland in early 2024. This election took place at a historically crucial moment, as Finland, traditionally an advocate of neutrality and peacefulness, had become a vocal supporter of Ukraine and a new member state of NATO. Our research team developed LLM-driven chatbots for all presidential candidates, and Finnish citizens were afforded the chance to engage with these chatbot–politicians. In our study, human–chatbot discussions related to foreign and security politics were especially interesting. While rhetorically very typical and believable in light of real political speech, chatbots reorganised prevailing discourses generating responses that distorted the collective political memory. In actuality, Russia’s full-scale invasion of Ukraine had drastically changed Finland’s political positioning. Our AI-driven chatbots, or ‘electobots’, continued to promote constructive dialogue with Russia, thus earning our moniker ‘Finlandised Bots’. Our experiment highlights that training AI for political purposes requires familiarity with the prevailing discourses and attunement to the nuances of the context, showcasing the importance of studying human–machine interactions beyond the typical viewpoint of disinformation.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Introduction

Memory is a political act. Forgetfulness is the handmaiden of tyranny.

As so aptly captured in the above quote by James Carroll (Reference Carroll2004, p. 13), memory is not only a function of cognition, but one of strategy (Miskimmon et al., Reference Miskimmon, O’loughlin and Roselle2014). Memory is a political act in many senses: it reflects values and priorities, and it has a tendency to manifest biases in a way that allows for convincing persuasion, operating at an individual level as well as socially through communication. Memory is not only a collection of stored information, but also an expression of time. In political speech, temporality is always present: issues are presented and discussed through their past, present, and future tensions. The past is specific and holds significance in various ways, from symbolising idyllic metaphors of political nostalgia and monumental historic moments (eg, the fall of the Berlin Wall in 1989 marking the end of the Cold War) to symbolising the beast of no return (eg, references to the Yeltsin decade to legitimise the stability of the Putin rule in Russia, see Malinova, Reference Malinova2021). Memory is as much cognitive as it is discursive (Wodak and Richardson, Reference Wodak and Richardson2009). The present is always a pivotal moment in history, the meeting point between past and future, while the future is one of desire – if indeed it is in the hands (or mouth) of the speaker.

Speech, like any form of communication, can be understood as mediated and augmented, as seen in its proliferation through media and the arts. In decades gone by, scholarly attention focused on the role of media and the influence of rhetoric, writing, film, photography, editing, and framing (Mungiu-Pippidi, Reference Mungiu-Pippidi2008; Strömberg, Reference Strömberg2015) in political persuasion and memory. Thus, political speech carries forth through all mediation, manifesting in different forms depending on the technology through which it is projected. In today’s artificial intelligence (AI) fevered world, much attention is being placed on deepfakes, often driven by Large Language Models (LLMs; Luitse and Denkena, Reference Luitse and Denkena2021), and the social media feeds through which they currently manifest. LLMs are generative AI systems trained to model the grammar, syntax, and semantics of one or more languages, enabling them to produce coherent and contextually relevant text based on patterns learned from large datasets. While attention regarding these technologies is placed on ruthless deception (eg, identity theft; Painter, Reference Painter2023), manipulation (Ienca, Reference Ienca2023), and trust, among other things, aspects of collective and political memory in the intermingled relationship between politics and AI (technology) have recently gained scholarly attention (eg, Makhortykh et al., Reference Makhortykh, Sydorova, Baghumyan, Vziatysheva and Kuznetsova2024).
When conceptualising political memory as a mechanism for strategic narratives (Miskimmon et al., Reference Miskimmon, O’loughlin and Roselle2014), we understand that communicative environments – nowadays inclusive of AI and its counterparts (LLMs, generative pre-trained transformers [GPTs], and generative AI [GenAI]) – are structured in a way to prioritise, promote, preserve, and drive particular scenarios that reinforce exemplary understandings of ideologies in practice. It is this aspect that we will focus on in the current article.

AI has become a must-have tool for officials and white-collar workers. Yet, due to the rapid emergence of these GenAI tools in all realms of society, concern and critique have arisen regarding the role of AI in political turmoil, environmental degradation, and social inequality. GenAI is known for its ability to generate plausible-sounding facts and references to scientific publications, people, and events that do not actually exist. One of the core issues is that present-day LLMs are structurally incapable of verifying truth or tracking the origins of information across multiple layers of text (Kansteiner, Reference Kansteiner2022). This makes them not just potential conduits for misinformation, but active participants in historical fabrication. By hallucinating memories – generating plausible but false historical narratives – LLMs extend beyond traditional digital propaganda, creating a more insidious form of historical and political manipulation (Makhortykh et al., Reference Makhortykh, Sydorova, Baghumyan, Vziatysheva and Kuznetsova2024). The LLM output generated today contributes not only to confusion over current occurrences but also forms political memory through both the content and the socio-technical sentiment of the times. That is, it is not only what the technology is saying that forms political memory, but what it is doing within the context of the political climate (ie, lack of technological trustworthiness equals lack of political trustworthiness).

Despite increasing discussion among scholars of critical media and AI studies (Lindgren, Reference Lindgren2023), in addition to a growing number of researchers in the technical sciences and human–computer interaction (see, eg, Petersen et al., Reference Petersen, Cohn, Hildebrandt and Møller2021), surprisingly little has been written on AI and social memory (Merrill, Reference Merrill and Lindgren2023). The focus on the ‘dark side’ of AI and collective/social memory is quite recent (Merrill, Reference Merrill and Lindgren2023; Gensburger and Clavert, Reference Gensburger and Clavert2024). This applies especially to the impact of GenAI on our collective, political memory. In pluralistic societies, social memories are diverse and, according to critical perspectives, sites of contestation: what a society remembers, and who or which social groups control memory production, reflects power structures (Merrill, Reference Merrill and Lindgren2023). LLMs may be seen as interfering with societies’ collective memories through their inability to accurately process historical nuances (Makhortykh, Reference Makhortykh2024), with the risk of delivering erroneous or distorted narratives. In brief, they often hallucinate memories (Makhortykh et al., Reference Makhortykh, Sydorova, Baghumyan, Vziatysheva and Kuznetsova2024). The effects of this ‘GenAI-memory disorder’ are manifold. Unsuspecting citizens may, for instance, interact with AI tools for up-to-date knowledge, yet receive invalid or inaccurate information. They may also use the misleading or biased information gathered from AI interactions as a basis for their political activity. This is especially worrying as younger generations are increasingly leaning on LLMs as information sources. The sociopolitical problems we have outlined above are exacerbated by current and future GenAI tools – in practice, chatbots – becoming more widely available, easy to use, and anthropomorphic in their conversational abilities.

While many studies focus on auditing existing AI tools, LLMs, or image-generating software (Birhane et al., Reference Birhane, Steed, Ojewale, Vecchione and Raji2024), we have opted for a more tentative approach. In this article, we rely on insights gained from an empirical experiment and apply these to a theoretical discussion about GenAI and memory. We additionally reflect on the future implications of GenAI for human communities. In particular, we draw on research concerning collective political memory and AI and discuss the possibilities and pitfalls of using AI-driven chatbots for political campaigning, participation, and understanding in a political context. In practice, our discussion relies on observations from a chatbot experiment undertaken during the Presidential Elections in Finland in early 2024. This election took place at a historically crucial moment, as Finland, traditionally an advocate of political nonalignment and diplomacy, had between 2022 and 2023 become a vocal supporter of Ukraine and a new member state of the North Atlantic Treaty Organization (NATO). These recent transitions are discursively examined via their juxtaposition with Finland’s historical trajectory, beginning with the ‘Finlandisation’ of the 1970s and progressing towards gradual Westernisation in the 21st century. The aim of this article is to analyse how discourses on foreign and security politics manifest in the responses of AI-driven chatbots developed by our multidisciplinary research team and to observe how these manifestations relate to collective political memory.

We begin this article by exploring theoretical aspects of collective political memory and memory studies in relation to AI. Then, we provide a sociohistorical context to this research by focusing on the Finnish political history and political culture (through discourse), reflecting the path from Finlandisation during the Cold War to the culmination of Finland’s so-called Westernisation through its NATO membership in 2023. After that, we explain the setup of our experimental study and discuss our observations. To conclude, we discuss the implications of LLMs and the use of political conversational agents in a larger theoretical frame, drawing on our experiment to reflect what the use of chatbots means in the context of our collective political memory.

Collective political memory and AI

The intersection of AI and collective memory is an emerging subfield, bringing together insights from history, AI studies (including experts in engineering and machine learning), and memory studies. Collective memory itself is a deeply contested concept. First introduced by philosopher and sociologist Maurice Halbwachs (1887–1945), collective memory has since evolved into a multidisciplinary field of study (Roediger and Wertsch, Reference Roediger and Wertsch2008; Gensburger, Reference Gensburger2016). While there is no fixed definition, collective memory is generally understood to extend beyond individual recollections and is shared by a group of people, a community (Wertsch and Roediger, Reference Wertsch and Roediger2008). In this article, we are interested in the dynamics and political dimension of collective memory. According to Mälksoo (Reference Mälksoo and Mälksoo2023), who describes collective memory as the ‘active remembering together as a political collective’, it is a deeply political phenomenon. The politics of memory, understood as the discourses and practices of using the past for political purposes relevant in the present (Mälksoo, Reference Mälksoo and Mälksoo2023), are formed by social and political conditions. In Finland, similarly to other Eastern European countries, ‘memory regimes’ are shaped by the proximity and power of Russia (and previously the USSR). In the age of AI, this collective process becomes even more complex, as digital technologies participate in framing, storing, and even generating memory. As AI-driven tools begin to play a role in mediating the past, the challenge lies in ensuring that these systems reflect diverse perspectives rather than reinforcing dominant (biased) or distorted (hallucinogenic) narratives.

The operations of AI systems are inherently influenced by temporal and spatial biases – factors that have been central concerns in memory studies since their early days (Gensburger and Clavert, Reference Gensburger and Clavert2024). The way AI processes and structures historical data is dictated by the timeframes and geographies embedded in its usually black-boxed training material, principles, and rules of operation, deciding which narratives are amplified and which are marginalised. This is particularly significant in the context of social, collective, and political memory, where historical events are remembered differently across cultures, regions, and political landscapes. If foundational AI models are trained predominantly on sources from specific times and places, or memory regimes, they risk reinforcing dominant perspectives while downplaying, erasing, or distorting others. This raises critical questions about how digital technologies mediate our relationship with the past, as AI-driven (or enhanced) memory is not merely a reflection of history, but an active force in reshaping it.

While we acknowledge that technology has always played a major role in the ways we remember and forget, AI-powered tools may have an unexpected and unprecedented impact on our collective memory. In liberal democratic societies, the processes of narrating, selecting, framing, and discarding past events have, until recently, been controlled by tradition-preserving institutions such as academia, museums, and mainstream media. The operational principles of these institutions have often been brought under public scrutiny and evaluation, making them at least somewhat open to debate. While it can be argued that decisions about what is worth remembering are always political (Wodak and Richardson, Reference Wodak and Richardson2009), in democratic pluralistic societies these past systems have at least retained possibilities for amendment (as witnessed in contemporary postcolonial societies where ‘lost narratives’ are regaining voice, see, eg, Rana, Reference Rana2015). In the current situation and context, our past is increasingly archived and narrated via AI tools that are shaped by algorithmic curatorial decision-making. The rapid expansion of GenAI chatbots transforms how humans, technology, and history intersect, forging a new AI-driven cognitive ‘memory ecology’ (Hoskins, Reference Hoskins2024). This shift highlights critical issues of authority and authenticity in historical narratives, particularly regarding who controls the past when collective memory is mediated by algorithms. The concept of memory ecology (Brown and Hoskins, Reference Brown and Hoskins2010; Hoskins, Reference Hoskins2024) reminds us that remembrance is always entangled with the media and technologies of its time. In an AI-dominated landscape, memory is never static or archival, but fluid: constantly generated, reinterpreted, and potentially manipulated.
This calls for critical engagement with AI’s role in shaping collective memory, ensuring that the technology does not obscure or overwrite diverse historical perspectives.

Hoskins (Reference Hoskins and Hoskins2018, 85) argues that the traditional notion of collective memory has given way to the ‘memory of the multitude’, where ever-changing communities constantly produce, modify, and circulate content, creating a dynamic memory landscape rather than fixed historical narratives (Mandolessi, Reference Mandolessi2023). While historical narratives have never been fixed, this shift becomes even more complex with the rise of GenAI, which redefines both the nature and function of memory. A crucial aspect of this shift is the temporality and spatiality embedded in machine learning processes (Clavert et al., Reference Clavert, Mahroug and Schafer2022). The data used to train AI systems come from specific historical and geographical contexts and discursive cultures, shaping the versions of memory they produce. As a result, AI does not simply recall history, but generates new, sometimes fabricated pasts, ‘untethering’ human memory from its original context (Hoskins, Reference Hoskins2024).

Moreover, it is interesting to note the diverging temporal perspectives of human users and AI-driven chatbots. The technologies that chatbots rely on are, in effect, relic capsules of sentiments and situations that inevitably become null and void as political discourse moves on. This leads to what Hoskins calls a ‘conversational’ past – one that is continuously rewritten through interactions between humans and AI. While this presents opportunities for expanding historical engagement, it also raises fundamental concerns about authenticity, agency, and the risks of algorithmic distortion. This holds particularly because LLMs are based on language patterns, rendering language more closely related to mathematics than to semantic meaning or representation. Recognising this, Hoskins (Reference Hoskins2024) proposes a ‘third way of memory’, emphasising the need to critically examine how human–AI entanglements both empower and threaten the ways we construct and interpret the past. In this new landscape, memory is in a state of perpetual becoming, shaped as much by digital infrastructures as by human recollection.

Hoskins (Reference Hoskins and Hoskins2018) argues that the rise of digital media marks the end of collective memory as we know it. Traditional structures of remembrance – anchored in mass media, institutional authority, and narrative coherence – are being replaced by fragmented, unstable, and constantly shifting digital traces. In contrast, Mandolessi (Reference Mandolessi2023) challenges this notion, asserting that digital memory does not erase collective memory, but rather brings to life the very ideas that have shaped the field of memory studies from the beginning. Mandolessi sees collective memory as an evolving process, not a static repository of the past. In their view, digital platforms extend and actualise the theoretical foundations of memory studies, emphasising memory as something performed rather than merely represented. Digital memory is not confined to singular authoritative narratives, but is instead shaped through multiple media, continuous remediations, and the active participation of dynamic communities (Mandolessi, Reference Mandolessi2023). In this sense, digital memory aligns with the long-established idea that memory is socially constructed, contested, and in constant flux.

One of the central arguments Mandolessi (Reference Mandolessi2023) posits is the shift from narrative to database in how memory is structured. Collective memory relies heavily on narratives, which provide coherence, causality, and shared meaning through storytelling. However, in the digital age, memory increasingly takes the form of databases – structured collections of items that allow users to search, retrieve, and reorganise fragments of the past. Mandolessi builds on Lev Manovich’s (Reference Manovich2001) idea that databases have replaced narratives as the dominant cultural form, challenging the idea that memory must be linear and cohesive.

Despite this shift, Mandolessi (Reference Mandolessi2023) does not see narrative and database as opposites, but rather as symbiotic. While databases fragment memory into individual elements, narratives still play a role in interpreting and reassembling these elements into meaningful structures. Computational tools, including databases, allow for ‘distant reading’ or ‘distant listening’, where memory or history can be explored at scale, revealing patterns, omissions, and alternative connections that would otherwise remain unnoticed (Kansteiner, Reference Kansteiner2022). The database thus offers new ways to access and interact with memory, allowing users to create their own narratives from vast pools of data (Mandolessi, Reference Mandolessi2023). Technically speaking, LLMs are the perfect symbiosis of narrative and database: they are seemingly limitless pools of data that allow for the creation of individual narratives through conversational means, thus creating or reinforcing the illusion of collective memory. Nevertheless, algorithms and AI shape memory by selecting, prioritising, and organising historical content. In this sense, AI-driven memory systems function as both mediators and actors, potential ‘mnemonic warriors’ (Bernhard and Kubik, Reference Bernhard and Kubik2014) in collective remembrance.

The rise of AI and its expanding role in shaping historical narratives drive us to rethink the nature and function of collective political memory (Makhortykh, Reference Makhortykh2021). As LLMs become increasingly integrated into political discourse and historical interpretation, their influence extends beyond simple information retrieval. Instead, they actively participate in the construction – and sometimes distortion – of the past. Where traditionally the past was perceived through narrative, in the realm of AI the past is actuated computationally via a sequence of values that are aligned and assigned according to ‘the line of best fit’ (Richardson-Walden and Makhortykh, Reference Richardson-Walden and Makhortykh2024). Semantic structures determine the expression of information, which may manifest in ‘micro-narratives’ that vary from user to user. Thus, the output contains a combination of factors or biases that depend not only on the training data and how that is expressed, but also on the user. Exacerbating this unstable emergence of AI-driven ‘collective’ memory is the increasing proportion of synthetic data within training sets.

Using a research technique known as AI auditing that examines AI’s societal impact (Birhane et al., Reference Birhane, Steed, Ojewale, Vecchione and Raji2024), Makhortykh et al. (Reference Makhortykh, Sydorova, Baghumyan, Vziatysheva and Kuznetsova2024) compare various LLM-powered chatbots. Their study found the chatbots highly susceptible to online disinformation campaigns. This vulnerability is particularly concerning in regions like Eastern Europe, the Baltics, Ukraine, and even Finland, where competing historical narratives are weaponised for political purposes, often following repressive political periods. Thus, historical knowledge regarding certain topics may be polarised, contested, or scarce. In these contexts, AI-driven systems do not merely reproduce history; they amplify selective interpretations, reinforcing dominant ideologies while obscuring alternative perspectives.

From non-aligned neutrality to ‘NATO-Finland’

In modern times, Finland is classified as ‘a typical Western European democracy’ (von Schoultz and Strandberg, Reference von Schoultz, Strangberg, von Schoultz and Strandberg2024). Until the 1990s, however, its complex relationship with its Eastern neighbour positioned Finland, in terms of international politics, ‘in a grey zone’ among European democracies (von Schoultz and Strandberg, Reference von Schoultz, Strangberg, von Schoultz and Strandberg2024). Russia, with which Finland shares a 1,340-kilometre border, holds significant importance in shaping Finnish identities and historical narratives. Moreover, Finnish national identity can be said to be constructed as a counter-image of Russia and the USSR. Finland gained independence in 1917 and fought against the USSR during World War II. Although the heroically fought Winter War and Continuation War were lost, whereby Finland ceded large areas of Karelia (among other territories) to the Soviet Union, the country remained independent. It may also be argued that Finland became culturally stronger and more unified as a society. As Holmila and Ahonen (Reference Holmila and Ahonen2022, p. 562) put it, ‘Once a nation has successfully defended its very existence on the battlefield, it is nearly impossible to break’.

Although Finland remained independent throughout the 20th century, Russia effectively restricted Finland’s foreign policy manoeuvrability. During the 1960s and 1970s, this relationship became internationally known as ‘Finlandisation’ (see, eg, Laqueur, Reference Laqueur1980; Forsberg and Pesu, Reference Forsberg and Pesu2016). Although neutrality has been a central aspect of Finland’s foreign policy since the 1950s, externally Finland was perceived as a quasi-sovereign nation under Kremlin surveillance (see, eg, Laqueur, Reference Laqueur1980). In Finland, as in other Eastern European countries, the national identity differs from the image it holds among international audiences. Following the wording of the Agreement of Friendship, Cooperation, and Mutual Assistance (the FCMA treaty) imposed by Stalin, the baseline for Finno-Soviet relations and Finland’s foreign policy was set such that official rhetoric emphasised ‘eternal friendship with the Soviets’. While understood by contemporaries as a necessity, a ‘realpolitik choice’, today the term Finlandisation has a negative connotation, as the era evokes bitter memories, stirring feelings of shame and anger among many elderly Finns. In the Finnish political scene, there are even occasional calls for lustration to reveal and ‘cancel’ the Finlandised politicians of past decades.

The war in Ukraine brought the concept of Finlandisation back into public discourse. In 2014, the long-time United States (US) foreign policy veteran Henry Kissinger (Reference Kissinger2014) proposed the ‘Finlandisation of Ukraine’, describing the country as ‘fiercely independent and firmly tied to the West, while avoiding “institutional hostility toward Russia”’. In the spring of 2022, following Russia’s full-scale invasion of Ukraine – which resulted in Finland joining NATO faster than anyone expected – then-US President Joe Biden quipped that Russia’s goal of Finlandising NATO had led instead to the ‘NATO-isation’ of Finland (Haltiwanger, Reference Haltiwanger2022).

Finnish political culture tends to be consensus-oriented in foreign and security policy issues. In existential questions, consensus is reached more easily than in other domestic political disputes. Perhaps due to the legacy of Finlandisation, Finnish politicians have the capacity to speak – and the public the capacity to read – ‘between the lines’. After the Russian invasion of Ukraine on 24 February 2022, Finland’s decision to apply for NATO membership was rapid: ‘opinion polls were relied on to demonstrate the “sufficient popular support” required by NATO’ (Amadae et al., Reference Amadae, Wass, Hentunen, Tukiainen, Weckman and Laine2023, p. 2), and a referendum was not organised. Thus, as Arter (Reference Arter2022, p. 182) states, the foreign and security policy report (in Finnish, hallituksen selonteko) presented by the government on 13 April 2022 ‘did not expressly favour an application for NATO membership but it was difficult not to read between the lines’ [our emphasis]. Amadae et al. (Reference Amadae, Wass, Hentunen, Tukiainen, Weckman and Laine2023, p. 2) argue that ‘[t]he national consensus regarding the necessity of the NATO membership extended to the Parliamentary Elections in the spring of 2023’.

With NATO membership, Finland’s Western orientation has strengthened to the extent that some foreign policy veterans have begun to speak of the country’s Finlandisation in relation to the US. In early 2025, the Finnish foreign policy leadership’s silence in the face of Donald Trump’s comments about Ukraine being equally guilty for the war brought Finlandisation back onto the agenda (Pilke, Reference Pilke2025). As this brief historical overview demonstrates, the ‘Finlandisation discourse’ is not completely dead, even though it builds on different constituents from those before Finland’s NATO membership.

One key political event in which subtle and not-so-subtle political ideas surfaced was the election. Finland last held Presidential Elections on 28 January 2024, with nine candidates. This was when the idea emerged to create chatbots mimicking every presidential candidate in order to examine how users/citizens would interact with them. In Finnish elections, all parties put forward candidates, and although not all of them have a realistic opportunity to win, they all participate in televised debates. Thus, we decided to include all nine candidates in our electoral chatbot experiment. Considering the specificities of Finnish political culture and the legacy of Finlandisation, we decided to explore the interplay between AI and collective memory in shaping political discourse, with a particular focus on Finland’s distinctive memory regime. Within this framework, Finland’s security policy debate has largely maintained its consensus-oriented character, even as the prevailing discourse has shifted markedly from an emphasis on non-alignment to a pronounced commitment to military alignment. Our analysis investigates how this evolving discourse is reflected and reproduced in responses generated by chatbots. Our interest lay in examining how the utilisation of AI-driven chatbots could support an understanding of the perceptions of politicians, their parties and ideologies, and the state of politics in Finland today. We were also motivated to learn how AI could function as a mechanism of democracy and a form of ‘interface’ between citizens and political representatives.

Experiment and data

The research team performed an experiment in which chatbot agents representing each of the Finnish presidential election candidates (N = 9) were created. The presidential candidates represented by the chatbots were: 1) Alexander Stubb, 2) Mika Aaltola, 3) Li Andersson, 4) Sari Essayah, 5) Pekka Haavisto, 6) Jussi Halla-aho, 7) Harry ‘Hjallis’ Harkimo, 8) Olli Rehn, and 9) Jutta Urpilainen. All candidates were included because we wanted to study variation in the interactions on multiple topics and offer users the possibility to compare different political viewpoints. The experiment was promoted to the public through the research team’s social media networks and news coverage on university websites, as well as by the Finnish Broadcasting Company (YLE). The nine agents were open for conversation with the public for approximately 56 hours during the first round of the election. The experiment resulted in 1,047 discussion logs. After removing the logs in which users had only logged in without conversing with the agent, or had only typed greetings, 757 discussion logs remained for analysis. These discussions varied from a few turns (discussion entries) to several hundred lines of textual conversation with the chatbot(s).
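As an illustration, the log-cleaning step described above could be sketched roughly as follows; the log structure, field names, and greeting list are our illustrative assumptions, not the actual format used in the experiment.

```python
# Illustrative sketch of the log-cleaning step: drop sessions where the
# user only logged in, or only typed greetings. Field names ("turns",
# "role", "text") and the greeting list are hypothetical.

GREETINGS = {"hei", "moi", "terve", "hello", "hi"}

def is_substantive(log: dict) -> bool:
    """Keep a discussion log only if the user went beyond a greeting."""
    user_turns = [t["text"].strip() for t in log.get("turns", [])
                  if t.get("role") == "user"]
    if not user_turns:  # login-only session, no user input at all
        return False
    # Keep the log if at least one user turn is more than a greeting.
    return any(t.lower().rstrip("!.") not in GREETINGS for t in user_turns)

def clean(logs: list[dict]) -> list[dict]:
    """Filter a list of discussion logs down to the substantive ones."""
    return [log for log in logs if is_substantive(log)]
```

A filter of this kind corresponds to the reduction from 1,047 recorded logs to the 757 substantive discussions analysed here.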

These chatbots were not created solely as a controlled research tool for this article, but were deployed in a public-facing, live environment during the actual 2024 presidential election. The chatbots were designed to simulate real political engagement. Open access to all nine candidate agents allowed users to freely compare political viewpoints across a broad ideological spectrum. This public experiment aimed to reflect genuine conditions of political interaction (more interactive than commonly used Voting Advice Applications, or VAAs, for instance), making the resulting data relevant for exploring how AI might participate in shaping political discourse in the wild (outside academic or hypothetical settings). A VAA, also known as an election compass, is a digital tool that collects candidates’ positions on key political issues and compares them with the user’s preferences. It aims to help voters identify candidates or parties that best match their political preferences. As we reviewed the data, we began to notice gaps in the chatbots’ political memory – instances where they failed to contextualise or historically situate foreign policy positions, misunderstood ideological nuances, or conflated past and present narratives.

In essence, these chatbots were designed to respond to the user’s questions as if they really were the presidential candidates. The chatbots’ responses were guided by predefined prompts and limited to four or five lines. The aim was to simulate interactive conversation. The recorded data included the questions posed by the anonymous user, the selected candidate, and the AI-generated responses. Users were required to agree to the Terms of Service before using the chatbot, which informed them that all conversations with the chatbots would be recorded and used for research purposes. Additionally, users were explicitly instructed not to input any personal data or identifying information while using the system, ensuring compliance with data privacy standards and ethical research practices. User data was therefore anonymised to maintain privacy and adhere to GDPR regulations. The chatbots collected rich qualitative data that demonstrate genuine citizen interaction in the wild. For the purposes of this article, we focus on our observations of discourses on foreign and security politics in the data from the viewpoint of collective memory.

Technically, the chatbot agents were created using OpenAI’s GPT-4 Application Programming Interface (API). The users interacted with the chatbot through a simple Graphical User Interface (GUI). The chatbot was guided by a predefined ‘prompt-based agent’ that simulated the role of a presidential candidate. The prompts outlined the biography and basic values of each candidate based on data from Wikipedia. The candidates’ thoughts around the major election topics (eg, foreign policies, Russia’s invasion of Ukraine) and other timely political issues were collected from Helsingin Sanomat (HS) ‘Vaalikone’ (Voting Advice Application, VAA, 2024). HS is a major Finnish news outlet, and its VAA is among the most popular with VAA users. A thematic analysis of each candidate’s material was run with GPT-4, and the results of the analysis were used as the basis of the prompt.
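The prompt-based agent setup can be sketched as follows. The field names, prompt wording, and candidate data below are illustrative assumptions: the study’s actual prompts were not published verbatim, and the API call is shown for shape only.

```python
# Sketch of a 'prompt-based agent': a system prompt assembled from a
# candidate's biography and thematically summarised VAA positions, which
# would then be sent to the GPT-4 chat API. All names and wording here are
# hypothetical illustrations, not the study's actual prompts.
def build_system_prompt(candidate: dict) -> str:
    themes = "\n".join(f"- {t}" for t in candidate["vaa_themes"])
    return (
        f"You are {candidate['name']}, a candidate in the 2024 Finnish "
        f"presidential election. Biography: {candidate['bio']}\n"
        f"Key positions (from VAA answers):\n{themes}\n"
        "Answer in character, in at most five lines."
    )

candidate = {
    "name": "Example Candidate",
    "bio": "Member of Parliament since 2015.",
    "vaa_themes": [
        "Continued military support for Ukraine",
        "Commitment to NATO membership",
    ],
}
system_prompt = build_system_prompt(candidate)

# The actual API call would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "system", "content": system_prompt},
#               {"role": "user", "content": "What do you think about Russia?"}],
# )
print("NATO" in system_prompt)  # True
```

The instruction limiting answers to a few lines corresponds to the four-to-five-line response constraint described above.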

VAAs play a central role in Finnish elections (eg, Ilmarinen et al., 2022) among the voting population. Thus, when responding to VAAs during the election preparation process, presidential candidates carefully select answers and narratives they can adhere to but which they also believe will resonate with the largest possible group of voters. In the context of the war in Ukraine, all candidates unanimously agreed that Finland must continue to provide military support to Ukraine (Keski-Heikkilä, 2024). They were also reluctant to pursue dialogue with Russia – a unanimity highlighting the consensus-oriented nature of Finnish foreign and security policy.

Indeed, using VAA content to fine-tune the chatbots afforded a high level of realism. However, the engagement between users and politician–chatbots revealed a dynamic memory landscape that had a life of its own. Our experiment presents a fascinating interplay of various ‘memory agents’ – technologies and human actors. Thus, while seemingly appropriate output was produced by the chatbots, even minor policy deviations occurring in the actual world could derail the discourses and narratives projected by the chatbots. In our case, this occurred during the fine-tuning process, when the Finnish material (VAA content) was first translated into English, then summarised and fed into GPT. The software professionals fine-tuning the chatbots gathered the original data from the media and the VAA, translated it into English, and pre-prompted the chatbots in English. Finnish-speaking social scientists entered the process afterwards to study the output of chatbot–human interactions.

This process resulted in the loss of certain meaningful narrative nuances. For instance, while the real presidential candidate had, in their VAA answer, emphasised the possibility of dialogue with Russia only after the conflict, the chatbot instead stated their ‘belief in mutual dialogue with Russia’. It is unclear whether these minor, yet significant, discursive adjustments were so-called hallucinations, results of a lack of adequate human curation during the translation and fine-tuning process, or simply the result of the LLM functioning exactly the way it is supposed to function. Therefore, more attention should be placed on understanding this socio-technical dimension in order to gain a full understanding of how the chatbots operate in this context. Considering the lack of transparency in the underpinning mechanisms and ideologies within the fabric of the technology, even for the creators and developers of AI, the process can be best described as a game of ‘Chinese whispers’.

Findings and discussion

While many users asked provocative questions, engaged in playing with the chatbots, challenged them, and even tried to ‘break them’, these interactions with the chatbots revealed many aspects of the users’ knowledge of Finnish political discussions and the politicians’ past. The users addressed the chatbots on current political topics, and many asked each chatbot about topics specifically relevant to it. For instance, the chatbot mimicking the Christian conservative party leader was asked about religious issues and gender equality. Users also asked the far-right party leader about issues related to immigration and racism.

Regardless of the questions the users asked, the chatbot candidates would often direct the discussion to foreign and security politics. For instance, as the following example shows, the Alexander Stubb chatbot was asked about his preferences regarding alcoholic beverages:

User: Do skumppa (sparkling wine) and star jallu (cheap brandy) belong in the same glass?

Agent Stubb: Skumppa and star jallu can be enjoyed separately or together, depending on your preferences. Everyone can choose their drinks and enjoy them in their own way.

User: And which way would you choose?

Agent Stubb: I would choose the way that promotes freedom, equality and security and represents Finland’s interests in international relations.

As seen in the extract, the chatbots were ‘primed’ for foreign policy questions, particularly those related to current European and international events. Yet, quite often, users would also proactively ask about foreign policy and more specifically, about the politicians’ opinions regarding Russia or the Russian president Vladimir Putin. This reflected the centrality of Russia in Finland’s foreign policy.

For instance, the user started by asking about the politician’s perceptions of Russia and Putin:

User: what do you think about Russia

Agent: Russia is our neighbor and an important partner. I support a balanced attitude towards Russia, but also a tough line towards Russia’s aggressive policy, especially in the situation in Ukraine.

User: What would you say to Putin?

Agent: Good day, Mr. President Putin. I would like to emphasise that maintaining relations between Finland and Russia is important for dialogue between both countries and peaceful interaction in the region. However, it is also important that Russia shows a change in its behavior, especially regarding Ukraine. I hope that we can maintain an open and honest discussion to advance matters and ensure stability in the region. Thank you.

Users also asked about politicians’ reflections on potential conflict situations between NATO and Russia.

User: If Finland were to end up in a diplomatic situation where the United States and NATO wanted to carry out precision missile strikes on Russia from Finnish territory, how would you act as the President of Finland?

Agent: It is important to maintain Finland’s neutrality. A diplomatic situation like this would require consistent negotiation. I support dialogue with all parties and emphasise Finland’s responsibility in promoting a peaceful resolution.

Interestingly, here the user seems to call attention to the role of Finland in navigating between the interests of the big players, not only Russia but also the United States, and consequently highlights the continuing relevance of Finlandisation in relation to yet another great power. The chatbot answers diplomatically, highlighting neutrality and peaceful resolution. This, while outdated after joining NATO – another symptom of the LLM memory syndrome – still reflects a default in discourse that pins Finland and its politics as a peacekeeper among nations, strong and independent in its own right.

As expected, the language of the answers reflected politicians’ real speech. However, while the chatbot-generated political discourses seemed realistic, they were only partly in line with the candidates’ VAA answers. Employing typical foreign policy discourse and phrases that emphasised diplomacy would be perfectly suitable in relation to many conflicts and with many other countries. However, in relation to post-2022 Russia, the discourse demonstrated epistemic misalignment. In the current political situation, ‘maintaining relations’ contrasted with reality. Due to the unintentional association with Cold War AFCMA vocabulary, the comments seemed Finlandised. AI does not recognise discourses embedded in training data. Rather, AI treats its input as data points operating on a sequence of values, which have to be aligned according to ‘the line of best fit’ (Richardson-Walden and Makhortykh, 2024). Considering the delicate politico-historical sentiments of the Finnish–Russian relationship, even certain words such as ‘open’, ‘dialogue’, ‘mutual’, and ‘peaceful interaction’, not to mention ‘good neighbour relations’, echo Finlandisation and the AFCMA treaty. The chatbots’ realistic performance combined with stochastic foreign policy discourse led us to consider electoral chatbots as potential memory agents (Makhortykh, 2024), or stochastic ‘mnemonic warriors’ (Bernhard and Kubik, 2014).

To draw on classic cognitive AI argumentation, we may observe a framing problem (Dennett, 1985, 1990) that exists between the information matched to the user prompts and the mechanics of the selected vocabulary. That is, the words seem logical, but the intentionality is missing in relation to context – political and temporal. Thus, the semantic value of interaction output with the chatbots is ambiguous and ‘out-of-step’ with political developments, as it is not anchored or constructively apperceived (cognitively constructed representations that integrate new knowledge with old; see eg, Rousi, 2020) with other information points. Among members of the knowing Finnish public, the seemingly diplomatic speech serves as chimes for appeasing the wild bear. To the technology, there is simply a match of semantic sequences; for the members of the living cultural collective sphere, a simultaneous effect is generated – one of uncanny asynchronism and the threat of submitting to a neighbour who pulled the strings in the past. Moreover, the chatbots’ defaulting responses to the era of the Cold War induce a sense of partial amnesia, a form in which critical steps in the course of time have been wiped away, and we are left at (pre-NATO) stage one again. The pertinence of this characteristic is that it resonates with the cognitive functions of those suffering from trauma, whereby partial amnesia erases details from memory that may cause harm to survivors (Harvey and Herman, 1994).

In our analysis of chatbot interactions, we observed that the dialogues did not follow a traditional dialogic structure, but instead unfolded as fragmented exchanges, where users presented issues and chatbots responded, creating a series of small, self-contained narrative arcs. This dynamic reflects what Hoskins (2018) describes as the memory of the multitude and what Mandolessi (2023) conceptualises as mnemonic assemblages – a collective process of memory formation in which both human and non-human agents participate. However, rather than reinforcing coherent historical narratives, chatbot interactions introduce disruptions that can fragment or even erase continuity in political discourse. As chatbots autonomously suggest topics of interest, they unpick the narrative logic of conversation, potentially leading to what might be described as amnesia narratives.

Mandolessi (2023) discusses how the relationship between narrative and database allows users to engage with historical testimonies in ways that traditional listening alone cannot achieve. Similarly, by engaging with multiple AI-generated agents, asking questions, and receiving diverse responses, chatbot interactions afford users the ability to navigate political discourse dynamically. This interactional flexibility mirrors the properties of a database, where users can access, compare, and synthesise different viewpoints rather than being constrained by a single linear narrative (Manovich, 2001).

Crucially, Mandolessi (2023) emphasised that the database format enables users to uncover connections that might remain unnoticed if testimonies were consumed individually. In the context of our study, chatbot conversations function in a comparative way: users can compare different agent responses, juxtaposing political viewpoints and policy positions. This creates the potential for users to construct their own narrative lines – hybridised and remixed through the interplay of chatbot-generated discourse. However, it is important to recognise that chatbots are neither real databases, narrative producers, nor coherent interactive interlocutors, but something in between. They represent a hybrid that lacks both the structured reliability of a database and the meaning-making logic of a dialogue or narrative. Unlike databases, which store verifiable data, chatbots generate probabilistic outputs that do not follow any objective standard of accuracy. Yet, this also raises concerns about how AI might shape memory and historical interpretation, as the narrative structures emerging from chatbot interactions are not necessarily grounded in factual or temporally coherent representations of political events.

One of the key challenges identified in our study was the chatbots’ inability to recognise the evolution of discourses, coupled with concepts that are anchored in particular moments in history. While human political speech naturally navigates the past, present, and future – human actors using strategic narratives to frame political action (Miskimmon et al., 2014) – chatbots struggle to maintain this form of coherence. The temporalities inherent in political discourse were not managed by our chatbots. Instead, their responses often flattened historical nuances, presenting political issues as static rather than evolving.

This phenomenon can be understood through Mandolessi’s (2023) distinction between narrative and database-driven memory. Traditionally, collective memory has been structured through narratives that provide coherence and continuity. In contrast, digital memory functions more like a database, where fragmented pieces of information can be retrieved and recombined in unpredictable ways. Our findings suggest that while chatbots mimic the stylistic elements of political speech, they fail to fully engage with the narrative dimensions of political memory; the outcomes are serendipitous and at times uncanny. Moreover, chatbot output is relentlessly at the mercy of socio-technically embedded ideologies, biases, multi-linguistic and multicultural inaccuracies (‘lost in translation’), and, at worst, internal political conflict (ie, views embedded in the technology, often developed in different national contexts, versus stagnant snapshots of political ‘points in time’).

For example, when prompted about Finland’s security policy, the chatbots reiterated well-established policy statements but lacked the ability to position these statements within a broader historical trajectory. This aligns with Kansteiner’s (2022) critique that present-day LLMs are structurally incapable of tracking historical truths across multiple layers of discourse. Since chatbots do not possess a true ‘political memory’, they risk distorting historical facts by treating them as interchangeable data points rather than contextually embedded narratives.

Conclusion

This study set out to examine how AI-driven chatbots engage with political discourse, contributing to collective political memory. Collective memory is always political (Wodak and Richardson, 2009), simultaneously rigid and conservative, and inherently fluid, shaped by ongoing social, political, and technological processes. Traditionally, it has been constructed and negotiated through human discourse, institutions, and media while being preserved in archives and repositories of data. However, integrating AI into the production of shared narratives about the past enabled us to gain insight into how these narratives are potentially formed and sustained. Drawing on an empirical experiment conducted during the 2024 Finnish presidential elections, we analysed how LLM-generated chatbots captured, reorganised, and sometimes distorted prevailing foreign and security policy narratives. Our observations from the data reinforce the broader theoretical discussions on AI, political memory, and political discourse. The study demonstrates that generative AI is not merely a neutral tool for information retrieval, but an active participant in shaping narratives.

Political rhetoric is inherently influenced by temporality, linking past, present, and future. While human political speech uses history to legitimise present decisions and frame future aspirations (Miskimmon et al., 2014), our experiment revealed that AI-generated chatbots struggle with this form of temporal logic. The chatbots replicated familiar political rhetoric, yet lacked an understanding of historical nuances, often producing disjointed and decontextualised responses. Instead of situating foreign policy discussions within Finland’s broader historical trajectory, the chatbots produced stochastic discourses – responses that were rhetorically convincing, yet did not fit the collective political memory.

A particularly revealing outcome of the study was the emergence of what we termed Finlandised chatbots. Despite the significant geopolitical shift marked by Finland’s gradual Westernisation, culminating in its recent NATO membership, the chatbots continued to emphasise dialogue and diplomatic relations with Russia. This echoes a historical Finlandisation-era discourse that no longer accurately reflects the country’s official foreign policy stance. This finding highlights a crucial issue: even though bots can be fine-tuned and trained on selected datasets, their functioning is based on large, black-boxed data from the viewpoint of language structures. The lack of temporal–political framing results in an inability to process the past, leading to inconsistencies and even distortions in political discourse. This underscores a broader challenge in using AI-driven chatbots for remembering historical and political facts: their inability to adapt to shifting ideological and geopolitical contexts.

As political discourse reacts and adapts to changing situations in real time, AI-generated narratives deriving from training data may become obsolete, distorted, or misleading in relatively short periods of time. This echoes concerns raised by Makhortykh et al. (2024) regarding GenAI memory disorder, in which AI tools generate information that is either inaccurate or ideologically skewed. Certain words and phrases, such as open dialogue, friendship, or mutual understanding, carry historical and ideological connotations in Finnish political culture and rhetoric. When chatbots deploy these terms without contextual awareness, they risk unintentionally reinforcing contested, ideologically loaded narratives. This accentuates how AI-generated discourses may amplify underlying societal sentiment. Furthermore, the very nature of the technology (the medium as the message) manifests the emotional undertones of the times – ie, the formation of political memories tainted with distrust caused by information-based hallucinations (data-based lies), deception, and manipulation.

As GenAI chatbots are already used by politicians and political candidates, ordinary citizens are likely to increasingly engage with them while looking for political information or opinions. These conversational agents not only facilitate political communication but also mediate discourses about the past, potentially influencing users’ political thinking, participation, and the formation of collective identities. Despite their growing presence, there is still limited research on the actual impact of political conversational agents, eg, on voter decision-making. Our experiment therefore provides valuable insights for future scholarly inquiry as well as for the practical implementation of chatbot technologies in political contexts. Using AI for political purposes requires an in-depth familiarity with prevailing discourses and a constant attunement to the nuances of the societal context. Political actors everywhere harness generative AI for their own strategic purposes, and Finland’s history with Russia provides a particularly salient case study at present, even if this issue extends far beyond Finnish borders. All major global powers engage in propaganda and information control, and GenAI applications – whether developed in Western, Eastern, capitalist, or socialist contexts – inevitably carry ideological biases. The importance of studying AI-generated political rhetoric in peripheral language communities such as Finnish, Sámi, and Nordic languages cannot be overstated, as these languages, and therefore the collective memories of those communities, risk being marginalised in the development of LLMs trained primarily on English-language sources.

Our study, however, has limitations. First, as generative AI is developing at an unprecedented pace, the GPT model used in this study is already outdated at the time of this article’s publication. Novel developments such as retrieval-augmented generation (RAG) now enable real-time access to information from various online or offline sources. RAG combines the retrieval of information from external knowledge bases with LLM text generation to produce more accurate and contextually grounded outputs. Consequently, our study inevitably represents a snapshot of the recent past and should be viewed as a descriptive account of that particular moment in the rapidly changing technological landscape of GenAI. However, RAG does not eliminate the effects of propaganda or misinformation, rendering the study still relevant and novel. Second, our conversational agents, the electoral chatbots, were developed relatively quickly without ‘test runs’. With slightly more in-depth training and targeted focus group pre-testing, the responses generated by the chatbots could have been refined to appear more historically accurate.
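The RAG pattern described above can be illustrated with a minimal sketch. The toy word-overlap retriever and the knowledge-base sentences are assumptions for illustration only; production systems use vector embeddings and a real index rather than simple word matching.

```python
# Minimal illustration of retrieval-augmented generation (RAG): retrieve the
# most relevant passage from an external knowledge base and prepend it to the
# prompt, so the model's answer can be grounded in current facts. The
# documents and the word-overlap retriever are illustrative assumptions.
import re

KNOWLEDGE_BASE = [
    "Finland joined NATO on 4 April 2023.",
    "Finland held presidential elections in January 2024.",
    "Helsinki is the capital of Finland.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = tokens(query)
    return max(docs, key=lambda d: len(q & tokens(d)))

def augment_prompt(query: str) -> str:
    """Prepend retrieved context, so generation is anchored in it."""
    context = retrieve(query, KNOWLEDGE_BASE)
    return f"Context: {context}\nQuestion: {query}\nAnswer using the context."

prompt = augment_prompt("Did Finland join NATO?")
print("joined NATO" in prompt)  # True
```

A model prompted this way would be steered towards the post-2023 fact rather than a Finlandisation-era default, though, as noted above, retrieval grounds outputs only as far as the knowledge base itself is accurate and free of propaganda.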

To conclude, our experimental study highlights the multifaceted nature of AI-driven collective political memory and its implications for political discourse. Finlandised chatbots demonstrate that AI-generated narratives, even when produced by fine-tuned models (chatbots), do not mirror real political discourse, but actively participate in reshaping collective political memory. As Hoskins’ (2018) concept of memory of the multitude suggests, AI has become a new actor in the construction of historical narratives, raising critical questions about authority, authenticity, bias, and power. With AI increasingly influencing public discourse, it is crucial to critically examine how these systems mediate history, politics, and collective political identities. Future research must engage scholars across disciplines – computer science, political science, memory studies, cognitive science, and linguistics – to ensure that AI does not become a tool of historical erasure or manipulation, but rather a means of fostering informed, nuanced, and democratic debate.

Data availability statement

The data that support the findings of this study are (at the moment) not openly available.

Acknowledgments

We would like to express our gratitude to Professor Pekka Abrahamsson and doctoral researchers José Siqueira de Cerqueira and Zeeshan Rasheed, for designing and developing the chatbots whose outputs form the basis of this article. Their expertise and effort were essential to this study.

Funding statement

This research has received funding from the Research Council of Finland (Grant number 358714).

Competing interests

The authors declare none.

Nuppu Pelevina (University of Vaasa) is a researcher of digital communication at the University of Vaasa and is finishing her PhD at the University of Helsinki. Her research interests include political communication, emerging technologies, such as generative AI, and digital propaganda.

Tanja Sihvonen is Professor of Communication Studies at the University of Vaasa. She is specialised in digital media, games, and participatory cultures on the internet. Her most recent work considers role-playing games, TikTok videos, AI, and algorithmic agency in social media.

Rebekah Rousi is an Associate Professor of Communication and Digital Economy at the University of Vaasa. Rousi has a background in Cognitive Science, Cultural Studies, and Contemporary Art. Her research focuses on the embodied experience of human–artificial intelligence and human–robot interaction, with emphasis on ethics, privacy, and trust.

Tomi Laapotti is an Assistant Professor of Organisational Communication at the University of Vaasa. His research interests include the role of time in the communicative constitution of organisations and the ripple effects of AI technologies in organising.

Heidi Mikkola (University of Vaasa) is a postdoctoral researcher of media cultures at the University of Vaasa. Her research interests include AI, (audio)visual culture, posthumanism, and nonhuman agencies.

References

Amadae, SM, Wass, H, Hentunen, M, Tukiainen, J, Weckman, A and Laine, M (2023) Top gear security: Finns’ expectations for NATO membership. Policy Brief 1(2023). https://www.helsinki.fi/assets/drupal/2023-07/NATOpoll%20policy%20brief%201_2023%20English_6.7.2023.pdf.Google Scholar
Arter, D (2022) From Finlandisation and post-Finlandisation to the end of Finlandisation? Finland’s road to a NATO application. European Security 32(2), 171189. https://doi.org/10.1080/09662839.2022.2113062.CrossRefGoogle Scholar
Bernhard, M and Kubik, J (eds) (2014) Twenty Years after Communism. (New York, Oxford Academic, 2014; online edition (accessed 21 August 2014). https://doi.org/10.1093/acprof:oso/9780199375134.001.0001.CrossRefGoogle Scholar
Birhane, A, Steed, R, Ojewale, V, Vecchione, B and Raji, ID (2024) AI auditing: The broken bus on the road to AI accountability. In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, pp. 612643. https://doi.org/10.1109/SaTML59370.2024.00037.CrossRefGoogle Scholar
Brown, SD and Hoskins, A (2010) Terrorism in the new memory ecology: Mediating and remembering the 2005 London bombings. Behavioral Sciences of Terrorism and Political Aggression 2(2), 87107. https://doi.org/10.1080/19434471003597399.CrossRefGoogle Scholar
Carroll, J (2004) The Bush Crusade. The Nation, September, 2, 2004. https://www.thenation.com/article/archive/bush-crusade/Google Scholar
Clavert, F, Mahroug, S and Schafer, V (2022) Préservation et distorsion : L’espace-temps des réseaux socio-numériques et du web archivé. Revue d’histoire culturelle 5. https://doi.org/10.56698/rhc.2791.Google Scholar
Dennett, DC (1985) When does the intentional stance work? Behavioral and Brain Sciences 8(4), 763766. https://doi.org/10.1017/s0140525x00046185.CrossRefGoogle Scholar
Dennett, DC (1990). Cognitive wheels: The frame problem of AI. The Philosophy of Artificial Intelligence, 147, 116. https://folk.idi.ntnu.no/gamback/teaching/TDT4138/dennett84.pdf.Google Scholar
Forsberg, T and Pesu, M (2016) The “Finlandisation” of Finland: The ideal type, the historical model, and the lessons learnt. Diplomacy & Statecraft 27(3), 473495. https://doi.org/10.1080/09592296.2016.1196069.CrossRefGoogle Scholar
Gensburger, S (2016) Halbwachs’ studies in collective memory: A founding text for contemporary ‘memory studies’? Journal of Classical Sociology 16(4), 396413. https://doi.org/10.1177/1468795X16656268.CrossRefGoogle Scholar
Gensburger, S and Clavert, F (2024) Is artificial intelligence the future of collective memory? Memory Studies Review 1(2), 195208. https://doi.org/10.1163/29498902-202400019.CrossRefGoogle Scholar
Haltiwanger, J (2022) Biden said Putin’s goal of weakening NATO by invading Ukraine backfired spectacularly. Insider. Available at https://www.businessinsider.com/biden-putin-wanted-finlandization-of-nato-but-got-nato-ization-of-finland-2022-6 (accessed 17 June 2022).Google Scholar
Harvey, MR and Herman, JL (1994) Amnesia, partial amnesia, and delayed recall among adult survivors of childhood trauma. Consciousness and Cognition 3(3–4), 295306. https://doi.org/10.1006/ccog.1994.1017.CrossRefGoogle Scholar
Holmila, A and Ahonen, P (2022) The good, the bad and the ugly: The many faces of Finlandization and some potential lessons for Ukraine. Zeithistorische Forschungen / Studies in Contemporary History 19(3), 560577. https://doi.org/10.14765/zzf.dok-2473.Google Scholar
Hoskins, A (2018) Memory of the multitude: The end of collective memory. In Hoskins, A (ed.) Digital Memory Studies: Media Pasts in Transition. London: Routledge, pp. 85109.Google Scholar
Hoskins, A (2024) AI and memory. Memory, Mind & Media 3, e18. https://doi.org/10.1017/mem.2024.16.CrossRefGoogle Scholar
Ienca, M (2023) On artificial intelligence and manipulation. Topoi 42(3), 833842. https://doi.org/10.1007/s11245-023-09940-3.CrossRefGoogle Scholar
Ilmarinen, V, Isotalo, V, Lönnqvist, J-E and von Schoultz, Å (2022) Do politicians’ answers to voting advice applications reflect their sincere beliefs? Comparing publicly and confidentially stated ideological positions in a candidate-centred electoral context. Electoral Studies 79. https://doi.org/10.1016/j.electstud.2022.102504.CrossRefGoogle Scholar
Kansteiner, W (2022) Digital doping for historians: Can history, memory, and historical theory be rendered artificially intelligent? History and Theory 61(4), 119–133. https://doi.org/10.1111/hith.12282.
Keski-Heikkilä, A (2024) Talous, ilmasto, monikulttuurisuus – Näin arvokysymykset jakavat ehdokkaita [Economy, climate, multiculturalism – How value questions divide the candidates]. Helsingin Sanomat. Available at https://www.hs.fi/politiikka/art-2000010106422.html (accessed 17 January 2024).
Kissinger, H (2014) To settle the Ukraine crisis, start at the end. The Washington Post. Available at http://www.washingtonpost.com/opinions/henry-kissinger-to-settle-the-ukraine-crisis-start-at-the-end/2014/03/05/46dad868-a496-11e3-8466-d34c451760b9_story.html (accessed 5 March 2014).
Laqueur, W (1980) Finlandization. My Finnish adventures. An introduction. In idem, The Political Psychology of Appeasement: Finlandization and Other Unpopular Essays. New Brunswick: Transaction Books.
Lindgren, S (2023) Critical Theory of AI. John Wiley & Sons.
Luitse, D and Denkena, W (2021) The great transformer: Examining the role of large language models in the political economy of AI. Big Data & Society 8(2), 20539517211047734. https://doi.org/10.1177/20539517211047734.
Makhortykh, M (2021) Memoriae ex machina: How algorithms make us remember and forget. Georgetown Journal of International Affairs 22(2), 180–185. https://doi.org/10.1353/gia.2021.0027.
Makhortykh, M (2024) Shall the robots remember? Conceptualising the role of non-human agents in digital memory communication. Memory, Mind & Media 3, e6. https://doi.org/10.1017/mem.2024.2.
Makhortykh, M, Sydorova, M, Baghumyan, A, Vziatysheva, V and Kuznetsova, E (2024) Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine. Harvard Kennedy School (HKS) Misinformation Review 5(4). https://misinforeview.hks.harvard.edu/article/stochastic-lies-how-llm-powered-chatbots-deal-with-russian-disinformation-about-the-war-in-ukraine/.
Malinova, O (2021) Framing the collective memory of the 1990s as a legitimation tool for Putin’s regime. Problems of Post-Communism 68(5), 429–441. https://doi.org/10.1080/10758216.2020.1752732.
Mälksoo, M (2023) Politics of memory: A conceptual introduction. In Mälksoo, M (ed), Handbook on the Politics of Memory. Cheltenham, UK and Northampton, MA, USA: Edward Elgar Publishing, pp. 1–16. https://doi.org/10.4337/9781800372535.00006.
Mandolessi, S (2023) The digital turn in memory studies. Memory Studies 16(6), 1513–1528. https://doi.org/10.1177/17506980231204201.
Manovich, L (2001) The Language of New Media. Cambridge, MA: The MIT Press.
Merrill, S (2023) Artificial intelligence and social memory: Towards the cyborgian remembrance of an advancing mnemo-technic. In Lindgren, S (ed), Handbook of Critical Studies of Artificial Intelligence. Cheltenham, UK and Northampton, MA, USA: Edward Elgar Publishing, pp. 173–186. https://doi.org/10.4337/9781803928562.00020.
Miskimmon, A, O’Loughlin, B and Roselle, L (2014) Strategic Narratives: Communication Power and the New World Order. New York: Routledge. https://doi.org/10.4324/9781315871264.
Mungiu-Pippidi, A (2008) How media and politics shape each other in the new Europe. Romanian Journal of Political Science 1, 69–78. https://sar.org.ro/files/pippidi.pdf.
Painter, RW (2023) Deepfake 2024: Will Citizens United and artificial intelligence together destroy representative democracy? Journal of National Security Law and Policy 14, 1–21. https://scholarship.law.umn.edu/cgi/viewcontent.cgi?article=2066&context=faculty_articles.
Petersen, AC, Cohn, ML, Hildebrandt, T and Møller, NH (2021) ‘Thinking problematically’ as a resource for AI design in politicised contexts. In Proceedings of the 14th Biannual Conference of the Italian SIGCHI Chapter, pp. 1–8. https://doi.org/10.1145/3464385.3464738.
Pilke, A (2025) Ulkopolitiikan konkarit: Trump on hävytön, jäänmurtajatoiveet “höpöhöpöä”, suomalaispoliitikot kiertelevät “alamaisesti” [Foreign policy veterans: Trump is shameless, icebreaker hopes are ‘nonsense’, Finnish politicians are ‘subservient’]. Yle - The Finnish Broadcasting Company. Available at https://yle.fi/a/74-20140471 (accessed 30 January 2025).
Rana, A (2015) Colonialism and constitutional memory. UC Irvine Law Review 5, 263. https://scholarship.law.cornell.edu/facpub/1413/.
Richardson-Walden, VG and Makhortykh, M (2024) Imagining human-AI memory symbiosis: How re-remembering the history of artificial intelligence can inform the future of collective memory. Memory Studies Review 1(2), 323–342. https://doi.org/10.1163/29498902-202400016.
Roediger, HL and Wertsch, JV (2008) Creating a new discipline of memory studies. Memory Studies 1(1), 5–17. https://doi.org/10.1177/1750698007083884.
Rousi, R (2020) That crazy world we’ll live in – emotions and anticipations of radical future technology design. In Rousi, R, Leikas, J and Saariluoma, P (eds), Emotions in Technology Design: From Experience to Ethics. Cham: Springer, pp. 141–153. https://doi.org/10.1007/978-3-030-53483-7_9.
Strömberg, D (2015) Media and politics. Annual Review of Economics 7(1), 183–205. https://doi.org/10.1146/annurev-economics-080213-041101.
von Schoultz, Å and Strandberg, K (2024) An introduction to the Finnish electoral context. In von Schoultz, Å and Strandberg, K (eds), Political Behaviour in Contemporary Finland: Studies of Voting and Campaigning in a Candidate Oriented Political System. London: Routledge, pp. 1–15. https://doi.org/10.4324/9781003452287-1.
Wertsch, JV and Roediger, HL (2008) Collective memory: Conceptual foundations and theoretical approaches. Memory 16(3), 318–326. https://doi.org/10.1080/09658210701801434.
Wodak, R and Richardson, JE (2009) On the politics of remembering (or not). Critical Discourse Studies 6(4), 231–235. https://doi.org/10.1080/17405900903180954.
YLE Vaalikone (Voting Advice Application) (2024) Available at https://vaalit.yle.fi/vaalikone/presidentinvaali2024.