Abstract

With the rapid development of facial manipulation technologies, generated deepfake videos have caused a severe trust crisis in society. Existing work shows that effective extraction of the artifacts introduced during the forgery process is essential for deepfake detection. However, since the features extracted by supervised binary classification contain a large amount of artifact-irrelevant information, existing algorithms suffer severe performance degradation when the training and testing datasets mismatch. To overcome this issue, we propose an Artifacts-Disentangled Adversarial Learning (ADAL) framework that achieves accurate deepfake detection by disentangling the artifacts from irrelevant information. Furthermore, the proposed algorithm provides visual evidence by effectively estimating the artifacts. Specifically, a Multi-scale Feature Separator (MFS) in the disentanglement generator is designed to precisely transmit the artifact features and optimize the connection between the encoder and decoder. In addition, we design an Artifacts Cycle Consistency Loss (ACCL) that uses the disentangled artifacts to construct new samples, enabling pixel-level supervised training of the generator so that it estimates more accurate artifacts. Two symmetric discriminators run in parallel to differentiate the constructed samples from the original images in both the fake and real domains, making the adversarial training process more stable. Extensive experiments on existing benchmarks demonstrate that the proposed method outperforms state-of-the-art approaches.
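To make the cycle-consistency idea concrete, here is a minimal NumPy sketch of how a disentangled artifact map can be used to construct new samples in both domains and supervise the generator at the pixel level. All names, shapes, and the additive artifact model are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical toy sketch of the Artifacts Cycle Consistency Loss (ACCL) idea.
# Assumption: forgery artifacts act additively on a real image, so a perfect
# separator recovers an artifact map that transports samples between domains.

rng = np.random.default_rng(0)

def l1(a, b):
    """Pixel-level L1 distance used for cycle supervision."""
    return float(np.mean(np.abs(a - b)))

# Toy "images" (8x8 grayscale) under the assumed additive model.
real = rng.uniform(0.0, 1.0, size=(8, 8))
artifact = rng.uniform(-0.1, 0.1, size=(8, 8))
fake = real + artifact

# "Disentangle": here the ideal separator output, standing in for the generator.
est_artifact = fake - real

# Cycle-construct new samples in both domains:
#   real + artifact  -> should land in the fake domain,
#   fake - artifact  -> should land back in the real domain.
recon_fake = real + est_artifact
recon_real = fake - est_artifact

# ACCL: pixel-level supervision in both domains; near zero for a perfect separator.
accl = l1(recon_fake, fake) + l1(recon_real, real)
print(accl)
```

A learned separator would replace `est_artifact` with a network's output, and minimizing this loss pushes the estimated artifact map toward the true forgery residue.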
