Abstract

“Deepfake” technology uses machine learning to create fake videos by superimposing the face of one person onto the body of another in a new video. The technology has been used to create non-consensual fake pornography and sexual imagery, but there is concern that it will soon be used for politically nefarious ends. This study seeks to understand how the news media have characterized the problem(s) presented by deepfakes. We used discourse analysis to examine news articles about deepfakes, finding that news media discuss the problems of deepfakes in four ways: as (too) easily produced and distributed; as creating false beliefs; as undermining the political process; and as non-consensual sexual content. We provide an overview of how news media position each problem, followed by a discussion of the varying degrees of emphasis given to each problem and the implications this has for the public’s perception and construction of deepfakes.
