Abstract

In a landscape increasingly populated by convincing yet deceptive multimedia content generated by generative adversarial networks, such content poses a significant challenge to both human viewers and machine learning algorithms. This study introduces a shallow learning technique tailored to analyzing the visual and auditory components of videos, targeting the lower face region. Our method is optimized for ultra-short video segments (200-600 ms) and employs wavelet scattering transforms for audio and discrete cosine transforms for video. Unlike many existing approaches, our method performs well at these short durations and scales efficiently to longer segments. Experimental results demonstrate high accuracy, achieving 96.83% on 600 ms audio segments and 99.87% on whole video sequences from the FakeAVCeleb and DeepfakeTIMIT datasets. The approach is computationally efficient, making it suitable for real-world applications with constrained resources. The paper also explores the unique challenges of detecting deepfakes in ultra-short sequences and proposes a targeted evaluation strategy for these conditions.
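The abstract does not specify how the discrete cosine transform is applied to video frames, but a common pattern in deepfake detection is to compute a 2-D DCT of a face crop and keep only the low-frequency coefficients as a compact feature vector. The sketch below illustrates that idea; the function names (`dct2`, `lowfreq_dct_features`), the 64x64 crop size, and the 16x16 coefficient block are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II, applied along both axes of a 2-D array."""
    def dct_mat(N):
        # Row k, column n: cos(pi * (n + 0.5) * k / N), with orthonormal scaling.
        n = np.arange(N)
        C = np.cos(np.pi * (n + 0.5) * n[:, None] / N)
        C[0] *= np.sqrt(1.0 / N)
        C[1:] *= np.sqrt(2.0 / N)
        return C
    Cr = dct_mat(x.shape[0])
    Cc = dct_mat(x.shape[1])
    return Cr @ x @ Cc.T  # transform rows, then columns

def lowfreq_dct_features(frame, k=16):
    """Keep the top-left k x k low-frequency DCT coefficients
    (illustrative choice) as a flat feature vector."""
    return dct2(frame)[:k, :k].ravel()

# Toy stand-in for a grayscale lower-face crop.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
features = lowfreq_dct_features(frame)
print(features.shape)  # (256,)
```

A shallow classifier (e.g. an SVM or logistic regression) could then be trained on such vectors, one per frame of the 200-600 ms segment.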