Abstract

Deep fakes, a class of AI-generated audio-visual materials designed to appear as authentic records of actual speech, have garnered increasing attention as worries about foreign disinformation campaigns and so-called “fake news” have grown. Some critics raise concerns that this new technology will be too realistic for viewers to differentiate fact from fiction, allowing bad actors to manipulate elections, induce societal unrest, and incite panic. On this view, the influx of deep fake content may kill trust in media outright, as people come to assume that all content may be artificially generated “fake news.” Yet close consideration of the hypotheses put forth so far reveals an unstated assumption that has not yet received attention: that deep fakes, once they are technologically advanced and easy to produce, will either be believed without question or will fundamentally shift public perceptions of video such that even authentic recordings will be dismissed. This paper aims to fill that gap in the literature by critiquing the assumption that deep fakes will necessarily fool the public into believing lies or rejecting truth. The likely societal outcome may instead lie somewhere in the middle: society may develop proxy mechanisms for assessing the reliability of video evidence in the wake of deep fake technology. That likelihood rests on two classes of observations. First, the reason we trust images and video may stem largely from societal norms about the use of the medium rather than from something inherent to the medium itself. Second, history shows that similar concerns about digital photo editing did not lead to either of the outcomes now predicted for societal perceptions of truth; how society reacted to fake photos sheds much light on what is likely to happen with deep fake videos. Identifying the likely and less drastic social trends in reaction to deep fakes is especially important today because ongoing fears of the technology have prompted calls for regulatory responses. For example, many propose amending Section 230 of the Communications Decency Act (“CDA 230”) to increase liability for platforms that fail to take reasonable steps to limit the spread of deep fake content. To the extent that the ordinary operations of society are likely to manage the impact of deep fakes on perceptions of truth, the need for policy responses (which will no doubt be imperfect and potentially detrimental to valuable technological advances) is greatly lessened. This paper proceeds by covering two main areas: emerging technologies and online platform regulation. It first explains the likely reaction to deep fakes by reviewing the development of similar technologies as well as the key distinctions between deep fakes and the technologies of the past. Second, focusing primarily on calls to amend CDA 230, the paper examines whether regulatory responses to deep fakes are necessary or whether existing regulatory tools and free market forces will be sufficient.
