Abstract

In recent years, face forgery detection has gained significant attention, resulting in considerable advancements. However, most existing methods rely on CNNs to extract artefacts from the spatial domain, overlooking the pervasive frequency-domain artefacts present in deepfake content, which makes robust and generalized detection difficult. To address these issues, we propose a dual-stream frequency-spatial fusion network for deepfake detection. The network consists of three components: a spatial forgery feature extraction module, a frequency forgery feature extraction module, and a spatial–frequency feature fusion module. The spatial forgery feature extraction module employs spatial-channel attention to extract spatial-domain features, targeting artefacts in the spatial domain. The frequency forgery feature extraction module leverages focused linear attention to detect frequency-domain anomalies in internal regions, enabling the identification of generated content. The spatial–frequency feature fusion module then fuses the forgery features extracted from both domains, facilitating accurate detection of splicing artefacts and internally generated forgeries. This design enhances the model's ability to capture forgery characteristics accurately. Extensive experiments on several widely used benchmarks demonstrate that our carefully designed network exhibits superior generalization and robustness, significantly improving deepfake detection performance.
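To make the dual-stream design concrete, below is a minimal PyTorch sketch of the architecture outlined above. All module names, channel sizes, the squeeze-and-excitation-style spatial-channel attention, the kernelized stand-in for focused linear attention, and the concatenation-based fusion are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of a dual-stream frequency-spatial fusion network.
# Every design detail below (channel widths, attention simplifications,
# fusion by concatenation) is an assumption for exposition only.
import torch
import torch.nn as nn


class SpatialChannelAttention(nn.Module):
    """Channel re-weighting followed by a spatial attention map --
    a simplified stand-in for the paper's spatial-channel attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        x = x * self.channel_fc(x)  # channel attention
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(pooled))  # spatial attention


class LinearAttention(nn.Module):
    """Kernelized (linear-complexity) self-attention over flattened tokens;
    a proxy for the focused linear attention used in the frequency stream."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).flatten(2).chunk(3, dim=1)   # each (b, c, h*w)
        q, k = torch.relu(q) + 1e-6, torch.relu(k) + 1e-6  # positive feature maps
        kv = torch.einsum("bcn,bdn->bcd", k, v)            # (b, c, c): linear in h*w
        z = torch.einsum("bcn,bcd->bdn", q, kv)
        z = z / torch.einsum("bcn,bc->bn", q, k.sum(-1)).unsqueeze(1)
        return self.proj(z.view(b, c, h, w))


class DualStreamDetector(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.spatial_stem = nn.Sequential(nn.Conv2d(3, channels, 3, 2, 1), nn.ReLU(True))
        self.freq_stem = nn.Sequential(nn.Conv2d(3, channels, 3, 2, 1), nn.ReLU(True))
        self.spatial_attn = SpatialChannelAttention(channels)
        self.freq_attn = LinearAttention(channels)
        self.fusion = nn.Conv2d(channels * 2, channels, 1)  # simple concat-and-fuse
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, 1))   # real/fake logit

    def forward(self, x):
        # Frequency stream sees the log-amplitude spectrum of the input image.
        freq = torch.log1p(torch.fft.fft2(x, norm="ortho").abs())
        s = self.spatial_attn(self.spatial_stem(x))
        f = self.freq_attn(self.freq_stem(freq))
        fused = self.fusion(torch.cat([s, f], dim=1))
        return self.head(fused)


if __name__ == "__main__":
    logits = DualStreamDetector()(torch.randn(2, 3, 256, 256))
    print(logits.shape)  # torch.Size([2, 1])
```

The key structural point the sketch captures is that spatial and frequency evidence are extracted by separate attention-equipped streams and only merged in a dedicated fusion step before classification.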