Given the potential of generative artificial intelligence (GenAI) to create convincing digital clones of humans, it is not surprising that chatbots have entered politics. In a turbulent political context, these AI-driven bots are likely to be used to spread biased information, amplify polarisation, and distort our memories. Large language models (LLMs) lack ‘political memory’ and cannot accurately process political discourses that draw on collective political memory. Drawing on research concerning collective political memory and AI, we present observations from a chatbot experiment undertaken during the Finnish presidential election in early 2024. The election took place at a historically crucial moment: Finland, traditionally an advocate of neutrality and peacefulness, had become a vocal supporter of Ukraine and a new member state of NATO. Our research team developed LLM-driven chatbots for all presidential candidates, and Finnish citizens were given the opportunity to engage with these chatbot–politicians. In our study, human–chatbot discussions of foreign and security policy proved especially interesting. While rhetorically typical of, and believable as, real political speech, the chatbots’ responses reorganised prevailing discourses in ways that distorted collective political memory. In reality, Russia’s full-scale invasion of Ukraine had drastically changed Finland’s political positioning; our AI-driven chatbots, or ‘electobots’, nonetheless continued to promote constructive dialogue with Russia, earning them the moniker ‘Finlandised Bots’. Our experiment shows that training AI for political purposes requires familiarity with the prevailing discourses and attunement to the nuances of the context, and it underscores the importance of studying human–machine interactions beyond the typical viewpoint of disinformation.