Abstract

The rapid progress of sophisticated image editing tools has made it easy to manipulate original face images and create fake media content by transplanting one person's face onto another. Beyond image editing tools, natural-looking fake human faces can also be produced easily with Generative Adversarial Networks (GANs). Malicious use of these media generation technologies can lead to severe problems, such as fake pornography, defamation, or fraud. In this paper, we introduce a novel Handcrafted Facial Manipulation (HFM) image dataset and soft computing neural network models (Shallow-FakeFaceNets) with an efficient facial manipulation detection pipeline. Our neural network classifier, Shallow-FakeFaceNet (SFFN), shows the ability to focus on manipulated facial landmarks to detect fake images. The detection pipeline relies solely on RGB pixel information and does not leverage any metadata, which can be easily manipulated. Our method achieves 72.52% Area Under the Receiver Operating Characteristic curve (AUROC) on detecting handcrafted fake facial images, a gain of 3.99% in F1-score and 2.91% in AUROC, and 93.99% AUROC on detecting small GAN-generated fake images, a gain of 1.98% in F1-score and 10.44% in AUROC, compared to the best-performing state-of-the-art classifier. This study is aimed at developing an automated defense mechanism to combat fake images used in various online services and applications, leveraging our hand-crafted fake facial dataset (HFM) and the neural network classifier Shallow-FakeFaceNet (SFFN). In addition, our work presents a range of experimental results that can help guide future applied soft computing research toward effectively detecting both human-made and GAN-generated fake face images.
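The AUROC and F1 figures reported above can be computed for any binary real/fake classifier from its scores and predictions. As a minimal stdlib-only sketch (the label and score arrays below are illustrative, not taken from the paper's experiments), AUROC can be evaluated via the Mann-Whitney formulation: the probability that a randomly chosen fake image receives a higher score than a randomly chosen real one, with ties counting half.

```python
def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a random positive (fake, label 1) outscores a random negative
    (real, label 0); ties contribute 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(labels, preds):
    """F1-score for binary predictions: harmonic mean of precision and recall."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

# Illustrative toy data: two fakes (label 1) and two reals (label 0).
print(auroc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # → 0.75
print(f1([1, 1, 0, 0], [1, 0, 1, 0]))             # → 0.5
```

In practice a library such as scikit-learn (`roc_auc_score`, `f1_score`) would be used; the hand-rolled version above only illustrates what the reported percentages measure.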
