Abstract

Face recognition (FR) systems have demonstrated reliable verification performance, suggesting their suitability for real-world applications ranging from photo tagging in social media to automated border control (ABC). For an advanced FR system built on a deep-learning architecture, however, improving recognition accuracy alone is not sufficient; the system must also withstand potential attacks. Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible, or perceptible but natural-looking, adversarial input images that drive the model to incorrect predictions. In this article, we present a comprehensive survey of adversarial attacks against FR systems and elaborate on the effectiveness of new countermeasures against them. Further, we propose a taxonomy of existing attack and defense methods based on different criteria. We compare attack methods in terms of their orientation, evaluation process, and attributes, and defense approaches by category. Finally, we discuss open challenges and potential research directions.

Highlights

  • We review recent studies on adversarial example generation against face recognition (FR) systems, present an illustrative taxonomy of the corresponding methods according to their orientation, and compare these approaches in terms of orientation, evaluation process, and attributes

  • In contrast to DeepFool, which computes image-specific perturbations, Moosavi-Dezfooli et al. [67] proposed a newer algorithm that generates image-agnostic Universal Adversarial Perturbations capable of fooling a network on almost any image

  • We review adversarial examples generated against FR systems


Summary

BACKGROUNDS

We briefly introduce basic FR systems and elaborate on the models incorporated in the era of deep learning. The deep-learning-based FaceNet [41] and VGG-Face [2] models were introduced to train the popular GoogLeNet [42] and VGGNet [43] architectures, respectively, over large-scale face datasets. These models fine-tuned the networks via a triplet loss function applied to face patches produced by an online triplet mining method.

7) UNIVERSAL ADVERSARIAL PERTURBATIONS

In contrast to their DeepFool method, which computes image-specific perturbations, Moosavi-Dezfooli et al. [67] proposed a newer algorithm that generates image-agnostic Universal Adversarial Perturbations capable of fooling a network on almost any image. They attempted to find a universal perturbation n that satisfies the following constraint:

P_x(F(x) ≠ F(x + n)) ≥ δ    (9)

where F denotes the classifier's prediction and δ is the desired fooling rate over the input distribution. Papernot et al. [70] introduced defensive distillation, a variant of the distillation procedure that uses knowledge extracted from the network to improve its own robustness.
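As a concrete illustration of the training objective described above, the sketch below implements a FaceNet-style triplet loss in PyTorch. The embedding network `model` and the triplet index tensors are hypothetical placeholders; this is a minimal sketch of the loss described in [41], not the authors' implementation.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss pushing the anchor-negative distance to exceed the
    anchor-positive distance by at least `margin` (squared L2)."""
    d_ap = (anchor - positive).pow(2).sum(dim=1)  # ||f(a) - f(p)||^2
    d_an = (anchor - negative).pow(2).sum(dim=1)  # ||f(a) - f(n)||^2
    return F.relu(d_ap - d_an + margin).mean()

# Usage with a hypothetical embedding network `model`; embeddings are
# L2-normalized before the loss, as in FaceNet. The triplets (a_idx,
# p_idx, n_idx) would come from online mining within the batch.
# emb = F.normalize(model(images), dim=1)
# loss = triplet_loss(emb[a_idx], emb[p_idx], emb[n_idx])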
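The constraint in Eq. (9) can also be made concrete. The sketch below, assuming a PyTorch classifier `model` and a small sample of `images`, accumulates a single perturbation v until the fooling rate on the sample reaches the target δ. The inner FGSM-style gradient step is a simplified stand-in for the DeepFool-based minimal perturbation step used in [67].

import torch
import torch.nn.functional as F

def project(v, xi):
    """Project v onto the l_inf ball of radius xi ([67] also covers l_2)."""
    return v.clamp(-xi, xi)

def universal_perturbation(model, images, xi=0.05, delta=0.8,
                           step=0.01, max_epochs=10):
    """Search for one perturbation v with fooling rate >= delta,
    i.e. P(F(x) != F(x + v)) >= delta, as in Eq. (9)."""
    v = torch.zeros_like(images[0])
    # Clean predictions F(x) that the universal perturbation must flip.
    with torch.no_grad():
        clean = [model(x.unsqueeze(0)).argmax(1).item() for x in images]
    for _ in range(max_epochs):
        for x, y in zip(images, clean):
            x_adv = (x + v).unsqueeze(0).requires_grad_(True)
            pred = model(x_adv)
            if pred.argmax(1).item() == y:  # v does not fool this x yet
                F.cross_entropy(pred, torch.tensor([y])).backward()
                # FGSM-style ascent step; [67] uses DeepFool's minimal step
                v = project(v + step * x_adv.grad.squeeze(0).sign(), xi)
        with torch.no_grad():
            fooled = sum(model((x + v).unsqueeze(0)).argmax(1).item() != y
                         for x, y in zip(images, clean)) / len(images)
        if fooled >= delta:  # Eq. (9) satisfied on the sample
            return v
    return v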

ADVERSARIAL EXAMPLE GENERATION AGAINST FACE RECOGNITION
DEFENSE AGAINST ADVERSARIAL EXAMPLES
Findings
CHALLENGES AND DISCUSSION

