Abstract

ML-synthesized face samples, frequently called DeepFakes, are a serious threat to the integrity of information on the Internet and to face recognition systems. One of the main defenses against face manipulations is DeepFakes detection. In this paper, we first created a new DeepFakes dataset using the publicly available MUCT database; the dataset contains a diverse set of facial manipulations. In particular, we employed the smartphone application FaceApp with eleven different filters (i.e., each filter corresponds to a different facial manipulation), such as gender conversion, face swapping, and tattoo and hairstyle changes. Deep learning features have recently demonstrated remarkable performance in various real-world applications. Therefore, with the collected dataset, we study the effectiveness of deep features for identifying DeepFakes under different scenarios. We performed a rigorous comparative analysis of a convolutional neural network (CNN) model and widely used deep architectures such as VGG16, SqueezeNet, DenseNet, ResNet, and GoogLeNet via transfer learning for face manipulation detection. Empirical results show that deep-feature-based DeepFakes detection systems attain notable accuracy when trained and tested on the same kind of manipulation, but their performance drops drastically when they encounter a novel manipulation type that was not seen during training, indicating low generalization capability.
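As an illustration of the transfer-learning setup described above, the following is a minimal sketch (not the authors' exact configuration) of adapting an ImageNet-pretrained backbone, here ResNet-18 as one assumed choice among the listed architectures, to a binary real-vs-manipulated classifier; the dummy batch stands in for face crops from the FaceApp/MUCT data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 backbone (an assumed stand-in for the
# architectures named in the abstract).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the convolutional feature extractor so only the new head is trained,
# i.e., the pretrained layers serve purely as deep feature extractors.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a binary pristine-vs-manipulated head.
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

# One illustrative training step on a placeholder batch of 224x224 face crops.
images = torch.randn(8, 3, 224, 224)   # placeholder; real inputs would be dataset face images
labels = torch.randint(0, 2, (8,))     # 0 = pristine, 1 = manipulated
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```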
