Abstract

Facial manipulation enables facial expressions to be tampered with or facial identities to be replaced in videos. The resulting fake videos are so realistic that even human observers have difficulty distinguishing them from genuine footage, posing a serious threat to social and public information security. A number of facial manipulation detectors have been proposed to address this threat. However, previous studies have shown that the accuracy of these detectors is sensitive to adversarial examples, and existing defense methods are limited in both their applicable scenarios and their defensive effect. This paper proposes a new defense strategy for facial manipulation detectors that combines a passive defense method, bilateral filtering, with a proactive defense method, joint adversarial training, to mitigate the vulnerability of facial manipulation detectors to adversarial examples. Bilateral filtering is applied in the preprocessing stage to denoise input adversarial examples, requiring no modification to the model itself. Joint adversarial training intervenes at the training stage, mixing various adversarial examples with original examples to train the model; this yields a single model that defends against multiple adversarial attacks. Experimental results show that the proposed defense strategy effectively helps facial manipulation detectors counter adversarial examples.
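To make the passive defense concrete, the sketch below shows a minimal bilateral filter implemented from scratch in NumPy. This is an illustration of the general technique (smoothing adversarial perturbations while preserving edges), not the paper's exact implementation; the window radius and the `sigma_s`/`sigma_r` parameters are assumed values for demonstration.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Denoise a single-channel image with values in [0, 1].

    Each output pixel is a weighted average of its neighbourhood, where the
    weight combines spatial closeness (sigma_s) with intensity similarity
    (sigma_r). Flat regions are smoothed, which suppresses small adversarial
    perturbations, while strong edges are largely preserved.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Precompute the spatial Gaussian over the (2r+1) x (2r+1) window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weight pixels with dissimilar intensity.
            rng_k = np.exp(-((window - img[i, j])**2) / (2.0 * sigma_r**2))
            weights = spatial * rng_k
            out[i, j] = (weights * window).sum() / weights.sum()
    return out
```

Because the filter acts only on the input image, it can be dropped into the preprocessing pipeline of any detector without retraining or architectural changes, which is what makes it a purely passive defense.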

Highlights

  • Facial manipulation refers to swapping the target face with the source face, containing both forms of identity exchange and expression exchange

  • To address the above problems, this paper proposes a new defense strategy for facial manipulation detectors. This strategy designs effective methods to defend against adversarial example attacks from both passive and proactive defenses

  • We will show the defensive performance of the bilateral filtering method and joint adversarial training method in resisting white- and black-box attacks, respectively, from the perspective of passive and proactive defenses
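The proactive side of the strategy, joint adversarial training, can be sketched in miniature. The toy example below trains a logistic-regression "detector" on a mix of clean examples and FGSM adversarial examples crafted against the current parameters. The model, attack budget `eps`, and hyperparameters are illustrative assumptions; the paper applies the same idea to deep facial manipulation detectors with multiple attack types.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """FGSM: perturb each input along the sign of the loss gradient w.r.t. x."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # sigmoid prediction
    grad_x = (p - y)[:, None] * w[None, :]   # d(BCE loss)/dx
    return x + eps * np.sign(grad_x)

def joint_adversarial_train(x, y, eps=0.1, lr=0.1, epochs=200, seed=0):
    """Train logistic regression on a joint set of clean + adversarial examples."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=x.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Craft adversarial examples against the current parameters.
        x_adv = fgsm(x, y, w, b, eps)
        # Mix clean and adversarial examples into one training batch.
        xs = np.vstack([x, x_adv])
        ys = np.concatenate([y, y])
        p = 1.0 / (1.0 + np.exp(-(xs @ w + b)))
        g = p - ys                            # gradient of BCE w.r.t. logits
        w -= lr * xs.T @ g / len(ys)
        b -= lr * g.mean()
    return w, b
```

Training on the mixed set is what distinguishes joint adversarial training from ordinary training: the model repeatedly sees perturbed inputs generated against itself, so it learns decision boundaries that remain correct under those perturbations.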


Introduction

Facial manipulation refers to swapping the target face with the source face, covering both identity exchange and expression exchange. To cope with the threat posed by facial manipulation videos, a number of facial manipulation detectors have been proposed. One class is based on manual feature extraction [3,4,5], and the other on various deep neural networks [6,7,8,9,10]. Compared with traditional feature extraction methods, deep neural network-based methods generally achieve better detection performance. However, existing deep neural network-based facial manipulation detection models [11,12,13] are highly vulnerable to adversarial attacks, exposing a security weakness in these detectors.
