Abstract

Practically anyone can now generate a realistic-looking deepfake video. It is clear that the online prevalence of such fake videos will further erode societal trust in video evidence. To counter this looming threat, the research community has recently proposed many methods for detecting deepfakes. However, it is still unclear how realistic deepfake videos appear to an average person and whether detection algorithms are significantly better than humans at spotting them. This paper therefore presents a subjective study with 60 naïve subjects that evaluates how hard it is for humans to tell whether a video is a deepfake or not. For the study, 120 videos (60 deepfakes and 60 originals) were manually selected from the Facebook database used in Kaggle's Deepfake Detection Challenge 2020. The results of the subjective evaluation were compared with two state-of-the-art deepfake detection methods, based on Xception and EfficientNet (B4 variant) neural network models pre-trained on two other public databases: the Google and Jigsaw subset of FaceForensics++ and the Celeb-DF v2 dataset. The experiments demonstrate that while human perception is very different from machine perception, both are successfully fooled by deepfakes, albeit in different ways. Specifically, the algorithms struggle to detect deepfake videos that humans find very easy to spot.
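
As a rough illustration of the machine side of this comparison, the sketch below shows how a pre-trained EfficientNet-B4 backbone could be used to produce a per-frame deepfake score for a video. This is a minimal sketch assuming PyTorch, timm, and OpenCV; the model name, preprocessing, and the averaging of frame scores are illustrative assumptions rather than the paper's actual detection pipeline, which would also involve face cropping and fine-tuning on FaceForensics++ or Celeb-DF v2.

# Hypothetical sketch of per-frame deepfake scoring with an EfficientNet-B4
# backbone; not the authors' released code.
import cv2
import timm
import torch
import torch.nn.functional as F
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Binary classifier head (real vs. fake); in practice the weights would come
# from fine-tuning on a deepfake database such as FaceForensics++ or Celeb-DF v2.
model = timm.create_model("tf_efficientnet_b4", pretrained=True, num_classes=2)
model.eval().to(device)

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((380, 380)),  # EfficientNet-B4 native input resolution
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def video_fake_score(path: str, every_n: int = 10) -> float:
    """Average per-frame probability that the video is a deepfake."""
    cap = cv2.VideoCapture(path)
    probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # sample every n-th frame to keep inference cheap
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0).to(device)
            with torch.no_grad():
                logits = model(x)
            probs.append(F.softmax(logits, dim=1)[0, 1].item())  # class 1 = "fake"
        idx += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0

A per-video score close to 1 would flag the clip as likely fake; it is such per-video decisions from the two detectors that the study compares against the judgements of the human subjects.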
