Abstract

Fake videos have long been in circulation in mainstream media. However, with the increased popularity of online social networks, it has become far easier to spread such videos and achieve virality. Recent advances in deep learning have further fuelled this menace, as so-called deepfake videos are hard to distinguish from genuine ones. While deepfake video detection techniques attempt to separate fake videos from real ones, these detectors are now being subjected to adversarial attacks that undermine their efficacy. In this paper, we show that the accuracy of deepfake detectors can be considerably improved by incorporating an adversarial learning step during model building. We use the VGG19 deep network architecture as the deepfake detector, supplemented with adversarial training using the Iterative Fast Gradient Sign Method (I-FGSM). To further improve non-adversarial accuracy, an ensemble of models is used. Extensive experiments on a large deepfake video corpus under different white-box adversarial attacks demonstrate the significant adversarial robustness of the proposed method.
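For illustration, the sketch below shows the standard I-FGSM perturbation step that adversarial training of this kind typically relies on: starting from a clean input, small signed-gradient steps are taken repeatedly and the perturbation is projected back into an epsilon-bounded L-infinity ball. This is a minimal, PyTorch-style sketch under assumed settings; the function name and the hyperparameters eps, alpha, and steps are illustrative placeholders, not the paper's reported configuration.

```python
import torch


def ifgsm_attack(model, loss_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Generate I-FGSM adversarial examples for inputs x with labels y.

    Illustrative sketch: eps, alpha and steps are assumed values, not the
    paper's settings. Assumes inputs are scaled to [0, 1].
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a small step in the direction of the gradient sign.
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the clean input.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        # Keep pixel values in the valid range.
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

In adversarial training, such examples would be generated on the fly during each training step and included in the batch (or used in place of the clean batch) so that the detector learns to classify both clean and perturbed frames correctly.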
