Abstract
Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form is the deepfake: an AI-generated video that typically depicts people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate a degree of epistemic risk analogous to that found in traditional cases. Given that barn cases have posed a long-standing challenge for virtue-theoretic accounts of knowledge, I consider whether a similar challenge extends to deepfakes. In doing so, I consider how Duncan Pritchard’s recent anti-risk virtue epistemology meets the challenge. While Pritchard’s account avoids problems in traditional barn cases, I claim that it leads to local scepticism about knowledge from online videos in the case of deepfakes. I end by considering how two alternative virtue-theoretic approaches might vindicate our epistemic dependence on videos in an increasingly digital world.
More From: Synthese