Abstract

Face antispoofing detection aims to determine whether a presented face belongs to a live, legitimate user. Multimodality models generally achieve high accuracy; however, existing work on face antispoofing detection pays insufficient attention to the safety of the models themselves. The purpose of this paper is therefore to explore the vulnerability of existing face antispoofing models, especially multimodality models, under various types of attacks. We first study, from the perspective of adversarial examples, how well multimodality models resist white-box and black-box attacks. We then propose a new method that combines mixed adversarial training with a differentiable high-frequency suppression module to effectively improve model safety. Experimental results show that the accuracy of the multimodality face antispoofing model drops from over 90% to about 10% when it is attacked by adversarial examples. After applying the proposed defence method, however, the model still maintains more than 90% accuracy on original examples and reaches more than 80% accuracy on attack examples.
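
The white-box attack setting studied here can be illustrated with a minimal FGSM-style sketch. The toy logistic "liveness" classifier and all names below are illustrative assumptions for exposition, not the paper's actual models:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.03):
    """One FGSM step against a toy logistic liveness classifier.

    x    : input feature vector (stand-in for an image, values in [0, 1])
    y    : true label (1 = live face, 0 = spoof)
    w, b : model parameters (known to the attacker: white-box setting)
    eps  : perturbation budget per component
    """
    p = sigmoid(w @ x + b)
    # Gradient of the binary cross-entropy loss with respect to the input x
    grad_x = (p - y) * w
    # FGSM: step in the sign direction that increases the loss, then keep
    # the result in the valid input range
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.uniform(size=8)
x_adv = fgsm_perturb(x, y=1, w=w, b=0.0, eps=0.1)
print(np.max(np.abs(x_adv - x)) <= 0.1)  # True: perturbation stays within budget
```

Because the perturbation is bounded by `eps` per component, the adversarial input stays visually close to the original while pushing the classifier's loss upward, which is the mechanism behind the accuracy drop reported in the abstract.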

Highlights

  • Facial recognition has been gradually integrated into our daily life nowadays

  • Previous research has found that facial recognition systems can be spoofed by various face presentation attacks (PAs) [1,2,3]. These attacks include print attacks, video replay attacks, and 3D mask attacks

  • The above research was conducted only on convolutional neural networks with relatively simple structures and did not involve the multimodality detection models that have achieved excellent performance in recent years; moreover, the generation of perturbed images involved only RGB images, not Depth and IR images. Therefore, this paper focuses on the safety of multimodality models in the face of adversarial attacks, and its contributions are summarized as follows: (1) We select advanced single-modality and multimodality face antispoofing models and verify their vulnerability to white-box and black-box attacks on RGB, Depth, and IR images
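
The high-frequency suppression idea mentioned in the abstract can be sketched as a frequency-domain low-pass filter applied to the input: adversarial perturbations often concentrate in high spatial frequencies, so damping them can weaken the attack. The function name and the cutoff fraction below are illustrative assumptions, not the paper's exact differentiable module:

```python
import numpy as np

def suppress_high_freq(img, keep_fraction=0.25):
    """Zero out high spatial frequencies of a 2-D image.

    keep_fraction controls how much of the centered spectrum is kept
    along each axis; frequencies outside that window are discarded.
    """
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros_like(spec)
    kh, kw = int(h * keep_fraction), int(w * keep_fraction)
    cy, cx = h // 2, w // 2
    mask[cy - kh:cy + kh, cx - kw:cx + kw] = 1.0  # keep low frequencies only
    return np.fft.ifft2(np.fft.ifftshift(spec * mask)).real

# A constant image is pure low frequency, so it passes through unchanged.
img = np.ones((32, 32))
print(np.allclose(suppress_high_freq(img), img))  # True
```

In a trained defence, such a filter would sit in front of the classifier; since the operation is linear in the input, gradients can flow through it, which is what makes a suppression module of this kind usable inside adversarial training.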


Summary

Introduction

Facial recognition has been gradually integrated into our daily life, and its applications in mobile payment, security monitoring, and other fields are becoming more and more extensive. Alipay and other software have launched face-scan payment: users no longer need to carry mobile devices such as cell phones, because as long as the face recognition system detects the real face bound to the account, the payment can be completed. Such systems, however, are exposed to presentation attacks such as print attacks, where the face image of a legitimate user is printed on paper to attack the facial recognition system. In 2018, there was a case where a gang used software to create 3D avatars from citizens' photos in order to pass Alipay's facial recognition authentication and register the personal information of those users to obtain rewards for new users. Therefore, a safe face antispoofing detection model that can distinguish whether a live face is present is significant and valuable research


