Abstract

The deepfake technique replaces the face in a source video with a fake face generated by deep learning tools such as generative adversarial networks (GANs). Even facial expressions can be well synchronized, making fake videos difficult to identify. Using features from multiple domains has proven effective in the literature. Temporal information is also known to be particularly critical in detecting deepfake videos, since the face swapping of a video is performed frame by frame. In this paper, we argue that the temporal differences between authentic and fake videos are complex and cannot be adequately characterized at a single time scale. To obtain a complete picture of the temporal deepfake traces, we design a detection model with a short-term feature extraction module and a long-term feature extraction module. The short-term module captures the gradient information of adjacent frames, which is combined with frequency and spatial information to form a multi-domain feature set. The long-term module then reveals artifacts over a longer period of context. The proposed algorithm is tested on several popular databases, namely FaceForensics++, DeepfakeDetection (DFD), TIMIT-DF, and FFW. Experimental results validate the effectiveness of our algorithm through improved detection performance compared with related works.

Keywords: Deepfake video detection · Multi-domain features · Multi-scale temporal features · Cross-dataset performance
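The short-term idea described above can be illustrated with a minimal sketch: the temporal gradient is approximated by differencing adjacent frames, and a frequency-domain view is obtained from each frame's 2-D FFT. This is an illustrative assumption, not the paper's exact module; the function name `short_term_features` and the log-magnitude spectrum choice are hypothetical.

```python
import numpy as np

def short_term_features(frames):
    """Hypothetical multi-domain feature sketch (not the paper's exact design):
    - temporal gradient: difference of adjacent grayscale frames
    - frequency: log-magnitude of the 2-D FFT of each frame
    frames: array of shape (T, H, W), grayscale.
    Returns (gradients, spectra) of shapes (T-1, H, W) and (T, H, W).
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Short-term temporal trace: gradient between adjacent frames.
    gradients = np.diff(frames, axis=0)
    # Frequency-domain view: log-magnitude spectrum per frame.
    spectra = np.log1p(np.abs(np.fft.fft2(frames, axes=(1, 2))))
    return gradients, spectra

# Tiny synthetic clip: 4 frames of 8x8 noise.
rng = np.random.default_rng(0)
clip = rng.random((4, 8, 8))
gradients, spectra = short_term_features(clip)
print(gradients.shape, spectra.shape)  # (3, 8, 8) (4, 8, 8)
```

In a full detector, such per-frame features would be stacked with the spatial (pixel) channel and fed to the long-term module, which aggregates them over a longer window of the clip.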
