Abstract

Generating falsified faces with artificial intelligence, widely known as DeepFake, has attracted worldwide attention since 2017. Given the potential threat posed by this technique, forensics researchers have dedicated themselves to detecting such video forgery. Beyond exposing falsified faces, DeepFake admits further research directions such as anti-forensics, which can disclose vulnerabilities in current DeepFake forensics methods; it could also turn DeepFake videos into tactical weapons if the falsified faces become even harder to detect. In this paper, we propose a GAN model that serves as an anti-forensics tool. It features a novel architecture with additional supervising modules that enhance image visual quality, together with a loss function designed to improve the model's efficiency. Experimental evaluations show that DeepFake forensics detectors are susceptible to attacks launched by the proposed method, which can efficiently produce anti-forensics videos of satisfying visual quality without noticeable artifacts. Compared with other anti-forensics approaches, this is substantial progress for DeepFake anti-forensics: the attack launched by our method can truly be regarded as anti-forensics, as it fools detection algorithms and human eyes simultaneously.
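To make the stated objective concrete, the following is a minimal sketch of what an anti-forensics training loss of the kind described above could look like: an adversarial term that rewards the generator when a forensic detector misclassifies its output as real, plus a visual-quality term that keeps the attacked frame close to the source frame. The function names, the L1 quality term, and the weighting hyperparameter are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-12):
    """Standard binary cross-entropy on a single probability."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def anti_forensics_loss(detector_score, generated, reference, lam=10.0):
    """Hypothetical combined loss: fool the detector + preserve visual quality.

    detector_score: detector's probability that `generated` is fake, in (0, 1).
    generated, reference: image arrays with pixel values in [0, 1].
    lam: weight of the visual-quality term (an assumed hyperparameter).
    """
    # Adversarial term: push the detector toward labeling the frame "real" (0).
    adv = binary_cross_entropy(detector_score, 0.0)
    # Visual-quality term: mean absolute error against the unattacked frame,
    # a simple stand-in for the supervising modules mentioned in the abstract.
    quality = float(np.mean(np.abs(generated - reference)))
    return adv + lam * quality
```

In this sketch, driving `detector_score` toward 0 lowers the adversarial term, while the quality term penalizes visible deviation from the source frame, mirroring the dual goal of fooling detectors and human eyes at once.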
