Abstract

With breakthroughs in computer vision and deep learning, there has been a surge of realistic-looking fake face media manipulated by AI techniques such as DeepFake and Face2Face, which alter facial identities or expressions. Fake faces were mostly created for fun, but their abuse has caused social unrest. For example, some celebrities have become victims of fake pornography made with DeepFake, and there are growing concerns about fake political speech videos created with Face2Face. To maintain individual privacy as well as social, political, and international security, it is imperative to develop models that detect fake faces in media. Previous research can be divided into general-purpose image forensics and face image forensics. The former has been studied for several decades and focuses on extracting hand-crafted features of the traces left in an image after manipulation, whereas the latter is based on convolutional neural networks, mainly inspired by object detection models, that extract images’ content features. This paper proposes a hybrid face forensics framework based on a convolutional neural network that combines the two forensics approaches to enhance manipulation detection performance. To validate the proposed framework, we used a public Face2Face dataset and a DeepFake dataset that we collected ourselves. Experimental results on the two datasets show that the proposed model is more accurate and more robust across video compression rates than previous methods. Through class activation map visualization, the proposed framework indicates which face parts are considered important and reveals tampering traces invisible to the naked eye.
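The hybrid framework pairs a conventional image-forensics "trace" stream with a content-feature stream. The sketch below illustrates only the trace side: a high-pass residual filter of the kind used in hand-crafted forensics, which responds to the local statistical discontinuities that splicing leaves behind. The specific 3x3 kernel and the toy spliced patch are illustrative assumptions, not the paper's actual filter bank or data.

```python
import numpy as np

# A 3x3 high-pass kernel of the style used in conventional image forensics
# (SRM-like second-difference filter); the paper's exact filters may differ.
HIGH_PASS = np.array([[-1,  2, -1],
                      [ 2, -4,  2],
                      [-1,  2, -1]], dtype=np.float32) / 4.0

def high_pass_residual(img):
    """Convolve a grayscale image with HIGH_PASS ('valid' mode) to expose
    the noise residual where manipulation traces tend to concentrate."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * HIGH_PASS)
    return out

# Toy example: a smooth gradient image has near-zero residual energy,
# while a pasted (spliced) patch produces a strong residual at its border.
img = np.tile(np.linspace(0.0, 1.0, 32, dtype=np.float32), (32, 1))
clean_energy = float(np.abs(high_pass_residual(img)).mean())

tampered = img.copy()
tampered[8:24, 8:24] = 0.5  # simulated pasted region
tampered_energy = float(np.abs(high_pass_residual(tampered)).mean())

print(tampered_energy > clean_energy)  # prints True: the residual reacts to the splice
```

In the full framework, a residual map like this would feed one CNN stream while the raw face crop feeds the content stream, with the two feature sets fused before classification.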

Highlights

  • Given advances in computer vision and deep learning, fake face media aiming at impersonating target subjects has surged

  • We propose a face forensics model that combines the conventional image forensics approach with the fake face image forensics approach

  • For the experiments, we used two fake face datasets created with the Face2Face [5] and DeepFake [4] techniques

Introduction

Given advances in computer vision and deep learning, fake face media aiming at impersonating target subjects has surged. In June 2019, for example, Mark Zuckerberg became the newest victim of AI-manipulated media [1]. Such forged media may be uploaded to social media to propagate fake information, which can have serious moral, ethical, and legal implications. DeepFake has also been abused to make fake pornography by putting a victim’s face on a naked body, which then spreads across the Internet, raising significant social issues and concerns.

