Abstract

Recent advances in computer vision have produced potent tools for creating realistic deepfakes. A generative adversarial network (GAN) can forge captured media streams, such as images, audio, and video, and make them visually fit other environments. The dissemination of such fake media streams can wreak havoc in social communities and destroy the reputation of a person or a community. Moreover, it can manipulate public sentiment and opinion toward that person or community. Recent studies have suggested the convolutional neural network (CNN) as an effective tool for detecting deepfakes. However, most techniques cannot capture the inter-frame dissimilarities of the collected media streams. Motivated by this, this paper presents a novel and improved deep-CNN (D-CNN) architecture for deepfake detection with reasonable accuracy and high generalizability. Images from multiple sources are used to train the model, improving its overall generalizability. The images are re-scaled and fed to the D-CNN model. Binary cross-entropy loss and the Adam optimizer are utilized to improve the learning rate of the D-CNN model. Seven different datasets from the reconstruction challenge are considered, comprising 5,000 deepfake images and 10,000 real images. The proposed model yields an accuracy of 98.33% on AttGAN, 99.33% on GDWCT, 95.33% on StyleGAN, 94.67% on StyleGAN2, and 99.17% on StarGAN real and deepfake images, which indicates its viability in experimental setups.
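
The training setup summarized above (re-scaled images fed to a deep-CNN binary classifier trained with binary cross-entropy and the Adam optimizer) can be illustrated with a minimal sketch. The layer stack, input size, and learning rate below are illustrative assumptions only; the abstract does not specify the actual D-CNN configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical input size; the abstract only states that images are re-scaled.
IMG_SHAPE = (128, 128, 3)

def build_dcnn(input_shape=IMG_SHAPE):
    """A generic deep-CNN binary classifier (real vs. deepfake).

    The exact D-CNN layer configuration is not given in the abstract;
    this stack is a placeholder for illustration.
    """
    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=input_shape),  # re-scale pixel values
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # binary output: real vs. deepfake
    ])
    # Binary cross-entropy loss with the Adam optimizer, as stated in the abstract;
    # the learning rate here is an assumed value.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model
```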

