Abstract
As a form of synthetic media built on the Internet’s extensive visual datasets with evolving machine learning techniques, deepfakes raise the specter of new types of informational harms and possibilities for image-based abuse. There are calls for three types of defensive response: regulation, technical controls, and improved digital or media literacy. Each is problematic by itself. This article asks what kind of literacy can address deepfake harms, proposing an artificial intelligence (AI) and data literacy framework to explore the potential for social learning with deepfakes and identify sites and methods for intervening in their cultures of production. The article applies contextual qualitative content analysis to explore the most popular GitHub repositories and YouTube accounts teaching “how to deepfake.” The analysis shows that these sites contribute to socializing AI and establishing cultures of social learning, offering potential sites of intervention and pointing to new methods for addressing AI and data harms.