Published online by Cambridge University Press: 14 June 2025
This chapter examines the emergence of new ‘fact-checking’ epistemic authorities prompted by the generation of deepfakes, presenting a novel outlook that relies on Wittgensteinian notions. It discusses the related philosophical issues on the basis of three assumptions. The first assumption holds that deepfakes are a subset of technological games that employ unprecedented rules. The second treats video as a possible indicator of truth rather than a necessary one. The third postulates that a video exhibits depth- and surface-grammatical properties, which play an essential role in deepfake detection. From these assumptions, the chapter derives two conclusions. First, the immediate epistemological problem of deepfakes arises from our loss of autonomy in justifying our trust in a digital video. Second, an ethical problem accompanies the epistemological one, which points to an inevitable shift of focus from the regulation of deepfakes to the regulation of the regulating authorities.
Deepfakes in a Nutshell
Deepfake is an amalgamation of the concepts ‘deep learning’ and ‘fake’, and according to Nina Schick, it is a form of synthetic media, that is, ‘media (including images, audio, and video) that is either manipulated or wholly generated by AI’ (2021, 8). Deepfakes nevertheless descend from earlier forms of forged images. Regardless of an agent's intentions, a deepfake is a deployment of various technological advancements to manipulate the information given in a video's content or context. Britt Paris and Joan Donovan differentiate deepfakes from other kinds of image forgery on technical grounds and by the level of expertise required (2019, 10). Many AI-powered deep-learning methods have been developed to create deepfakes, representing the state of the art in image manipulation. One of the technological milestones used in deepfakes − Generative Adversarial Nets (henceforth, GANs) − emerged in 2014, introduced by Ian J. Goodfellow and colleagues (see Goodfellow et al. 2015). However, deepfakes only attracted wider attention around late 2017.
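The adversarial idea behind GANs can be illustrated with a deliberately minimal sketch: a generator learns to mimic a data distribution while a discriminator learns to tell real samples from generated ones, each improving against the other. The one-dimensional setup, the affine generator, and all parameter names below are hypothetical simplifications for exposition, not real deepfake code.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Discriminator D(x) = sigmoid(w*x + b): estimated probability that x is "real".
w, b = 0.1, 0.0
# Generator G(z) = a*z + c: maps random noise z to a "fake" sample.
a, c = 1.0, 0.0

real_mean = 4.0   # the "real" data: samples drawn from N(4, 1)
lr = 0.05         # learning rate for both players

for step in range(2000):
    x_real = random.gauss(real_mean, 1.0)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + c

    # Discriminator step: ascend the gradient of log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend the gradient of log D(fake), i.e. try to fool D.
    x_fake = a * z + c
    d_fake = sigmoid(w * x_fake + b)
    grad_x = (1 - d_fake) * w   # derivative of log D(x) at the fake sample
    a += lr * grad_x * z
    c += lr * grad_x

# After training, the generator's samples should have drifted toward the
# real distribution's mean, even though it never saw a real sample directly.
fake_mean = sum(a * random.gauss(0.0, 1.0) + c for _ in range(1000)) / 1000
```

In a real deepfake pipeline the generator and discriminator are deep neural networks operating on images rather than scalars, but the two-player training loop is the same: the generated output becomes convincing precisely because it is optimised against a detector.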
A well-cited 2019 report from Deeptrace reveals that there were 14,678 deepfakes online, 96 per cent of which had pornographic content (Ajder et al. 2019). Besides pornography and other harmful uses, deepfakes are deployed to create artistic or historical performances, such as Microsoft's ‘The Next Rembrandt’ (news.microsoft.com 2017) and the ‘JFK Unsilenced’ (accenture.com n.d.) projects (see Floridi 2018 for a lengthy discussion).