Abstract

Audio and video footage produced with the help of AI can show politicians doing discreditable things that they have not actually done. This is deepfaked material. Deepfakes are sometimes claimed to have special powers to harm the people depicted and their audiences—powers that more traditional forms of faked imagery and sound footage lack. According to some philosophers, deepfakes are particularly “believable,” and widely available technology will soon make deepfakes proliferate. I first give reasons why deepfake technology is not particularly well suited to producing “believable” political misinformation in a sense to be defined. Next, I challenge claims from Don Fallis and Regina Rini about the consequences of the wide availability of deepfakes. My argument is not that deepfakes are harmless, but that their power to do major harm is highly conditional in liberal party-political environments that contain sophisticated mass media.
