Abstract

Although manipulations of visual and auditory media are as old as media themselves, the recent entrance of deepfakes has marked a turning point in the creation of fake content. Powered by the latest technological advances in artificial intelligence and machine learning, deepfakes offer automated procedures to create fake content that is harder and harder for human observers to detect. The possibilities to deceive are endless—including manipulated pictures, videos, and audio—and organizations must be prepared as this will undoubtedly have a large societal impact. In this article, we provide a working definition of deepfakes together with an overview of its underlying technology. We classify different deepfake types and identify risks and opportunities to help organizations think about the future of deepfakes. Finally, we propose the R.E.A.L. framework to manage deepfake risks: Record original content to ensure deniability, Expose deepfakes early, Advocate for legal protection, and Leverage trust to counter credulity. Following these principles, we hope that our society can be more prepared to counter deepfake tricks as we appreciate deepfake treats.
