The rapid progress of generative image modeling poses the security risk of spreading fabricated visual information, even as these techniques enable many beneficial applications. To provide alerts and maintain a safe social environment, forgery detection has become an urgent and crucial means of countering such misuse, particularly for human faces, given the severe consequences when malicious creators spread disinformation widely. Despite the success of recent work on model design and feature engineering, detecting face forgeries produced by novel generation methods or drawn from unseen data distributions remains unresolved, because well-trained models are typically not robust to distribution shift at test time. In this work, we aim to reduce the sensitivity of an existing face forgery detector to new domains and thereby improve real-world detection under unknown test conditions. Specifically, we leverage test examples, selected by their uncertainty values, to fine-tune the model before making a final prediction. This yields a test-time training approach to face forgery detection: our framework combines uncertainty-driven test sample selection with self-training to adapt a classifier to target domains. To demonstrate the effectiveness of our framework and compare it with previous methods, we conduct extensive experiments on public datasets, including FaceForensics++, Celeb-DF-v2, ForgeryNet, and DFDC. Our results show that the proposed framework consistently improves many state-of-the-art methods, yielding both better overall performance and stronger robustness to novel data distributions.