Abstract

Applications of facial recognition have eased the process of personal identification. However, there are increasing concerns about the performance of these systems against the challenges of presentation attacks, spoofing, and disguises. One reason for the lack of robustness of facial recognition algorithms to these challenges is the limited amount of suitable training data. This shortage can be addressed by collecting a database in which each subject appears in several disguises, but this is an expensive process. Another approach is to use generative adversarial networks to synthesize facial images with the required disguise add-ons. In this paper, we present a synthetic disguised-face database for the training and evaluation of robust facial recognition algorithms. Furthermore, we present a methodology for generating synthetic facial images with the desired disguise add-ons. Cycle-consistency loss is used to generate facial images with disguises, e.g., fake beards, makeup, and glasses, from normal face images. Additionally, an automated scheme is presented for filtering the synthesized faces. Finally, facial recognition experiments are performed on the proposed synthetic data to show the efficacy of the proposed methodology and the presented database. Training on the proposed database achieves an improvement in the rank-1 recognition rate (68.3%) over a model trained on the original nondisguised face images.
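The cycle-consistency objective referred to above can be summarized with a short sketch. This is a minimal illustration in the style of CycleGAN-type unpaired translation between non-disguised and disguised faces; the toy generator architecture, tensor sizes, and loss weight are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a cycle-consistency loss for unpaired translation
# between non-disguised and disguised face images (CycleGAN-style).
# The generators below are toy stand-ins; the real networks, image sizes,
# and the weight lambda_cyc are assumed values, not the paper's settings.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for a full ResNet/U-Net generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

G = TinyGenerator()  # maps normal faces -> disguised faces
F = TinyGenerator()  # maps disguised faces -> normal faces
l1 = nn.L1Loss()
lambda_cyc = 10.0    # assumed weight; 10 is a common CycleGAN choice

def cycle_consistency_loss(real_normal, real_disguised):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, added to the adversarial terms."""
    fake_disguised = G(real_normal)            # x -> G(x)
    reconstructed_normal = F(fake_disguised)   # F(G(x)) should recover x
    fake_normal = F(real_disguised)            # y -> F(y)
    reconstructed_disguised = G(fake_normal)   # G(F(y)) should recover y
    return lambda_cyc * (l1(reconstructed_normal, real_normal) +
                         l1(reconstructed_disguised, real_disguised))

# Example forward pass with random tensors standing in for face batches.
x = torch.randn(4, 3, 64, 64)  # non-disguised faces
y = torch.randn(4, 3, 64, 64)  # disguised faces (e.g., glasses, fake beard)
print(cycle_consistency_loss(x, y).item())
```

In practice this term is minimized jointly with the adversarial losses of the two generator/discriminator pairs, which is what constrains the synthesized disguised face to preserve the identity of the input face.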

Highlights

  • Facial recognition (FR) has been a topic of interest for the last few decades

  • FR algorithms are trained on facial features, which are unique for each subject

  • We propose a methodology for disguised face synthesis and present a synthetic facial disguise database for the development of robust FR algorithms


Introduction

Facial recognition (FR) has been a topic of interest for the last few decades, and FR systems generally perform well in controlled environments. A controlled environment is defined as one in which the images are captured in a frontal pose, with good illumination and a neutral expression, and in which the subject is not trying to avoid recognition. In circumstances such as a spoofing scenario, however, the subject might try to hide his/her identity by using a disguise. Such scenarios pose a challenging problem for applications of FR. FR algorithms are trained on facial features, which are unique to each subject. There can be different outcomes in such a scenario, such as failure to recognize the subject, recognition of a different identity, or failure to detect the face altogether.
