To reduce the computational cost of forged-video detection while improving accuracy, this paper uses dynamic facial expression sequences as key sequences, replacing the original video sequences as input to the detection model, and designs a spatio-temporal dual-branch detection network based on the vision Transformer architecture. The process involves three steps. First, dynamic facial expression sequences are localized as key sequences using an optical flow difference algorithm. Second, the spatial branch network applies a focal self-attention mechanism to focus on the dynamic features of expression-related regions and uses Factorization Machines to enable feature interaction among the multiple key sequences, while the temporal branch network learns the temporal inconsistency of optical flow differences between adjacent frames. Finally, a binary linear SVM combines the Softmax outputs of the two branch networks to produce the final detection result. Experimental results on the FaceForensics++ dataset show that (a) replacing whole video sequences with facial expression key sequences reduces training and detection time by nearly 80% and 90%, respectively, and (b) compared with state-of-the-art methods based on random sequence/frame extraction and key-frame extraction from video compression techniques, the proposed approach achieves more competitive detection accuracy.
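The abstract does not specify the exact optical flow difference algorithm, so the following is only a minimal sketch of the idea: score sliding windows by how strongly the flow magnitude changes between adjacent frames and keep the highest-scoring windows as key sequences. The Farneback flow estimator, the window length `win`, and the number of kept windows `top_k` are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: key-sequence localization via optical flow differences.
# Parameter choices (Farneback flow, win=16, top_k=4) are assumptions.
import cv2
import numpy as np

def flow_magnitude(prev_gray, curr_gray):
    """Mean optical-flow magnitude between two grayscale frames (Farneback)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return np.linalg.norm(flow, axis=2).mean()

def locate_key_sequences(frames, win=16, top_k=4):
    """Score sliding windows by the summed frame-to-frame *difference* of
    flow magnitudes (dynamic expressions should produce large changes)
    and return the top_k highest-scoring windows of frames."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    mags = np.array([flow_magnitude(gray[i], gray[i + 1])
                     for i in range(len(gray) - 1)])
    diffs = np.abs(np.diff(mags))                # flow difference signal
    scores = [diffs[s:s + win].sum()
              for s in range(len(diffs) - win + 1)]
    starts = np.argsort(scores)[::-1][:top_k]
    return [frames[s:s + win] for s in sorted(starts)]
```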
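For the Factorization Machine step in the spatial branch, a minimal sketch of the standard second-order FM interaction is shown below, applied over per-key-sequence embeddings. The tensor layout (batch, num_sequences, dim) is an assumption about how the branch's features would be arranged; the identity itself is the usual FM pairwise-interaction trick.

```python
# Minimal sketch of an FM-style pairwise interaction layer for mixing
# features across key sequences. The input shape is an assumption.
import torch
import torch.nn as nn

class FMInteraction(nn.Module):
    """Second-order interaction over key-sequence embeddings.

    Input:  x of shape (batch, num_sequences, dim)
    Output: (batch, dim), using the elementwise identity
        sum_{i<j} x_i * x_j = 0.5 * ((sum_i x_i)^2 - sum_i x_i^2)
    """
    def forward(self, x):
        sum_sq = x.sum(dim=1).pow(2)   # square of the sum, (batch, dim)
        sq_sum = x.pow(2).sum(dim=1)   # sum of the squares, (batch, dim)
        return 0.5 * (sum_sq - sq_sum)
```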
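The final fusion step, a linear SVM over the two branches' Softmax outputs, might look like the sketch below. The random placeholder scores and the sklearn `LinearSVC` setup are illustrative only; the abstract does not describe the SVM's training procedure.

```python
# Illustrative fusion: a binary linear SVM over concatenated Softmax scores
# from the spatial and temporal branches. Data here is placeholder only.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
soft_spatial = rng.dirichlet([1, 1], size=200)   # (n, 2) branch-1 Softmax
soft_temporal = rng.dirichlet([1, 1], size=200)  # (n, 2) branch-2 Softmax
labels = rng.integers(0, 2, size=200)            # 0 = real, 1 = fake

fusion_features = np.hstack([soft_spatial, soft_temporal])  # (n, 4)
svm = LinearSVC(C=1.0)
svm.fit(fusion_features, labels)
pred = svm.predict(fusion_features)  # final real/fake decision per video
```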