Abstract

Due to the COVID-19 pandemic, the need for a contactless biometric system able to recognize masked faces drew attention to the periocular region as a valuable biometric trait. However, periocular recognition remains challenging for deployments in the wild or in unconstrained environments, where images are captured under non-ideal conditions with large variations in illumination, occlusion, pose, and resolution. These variations increase within-class variability and between-class similarity, which degrades the discriminative power of the features extracted from the periocular trait. Despite the remarkable success of convolutional neural network (CNN) training, CNNs require huge volumes of data, which are not available for periocular recognition. In addition, standard training focuses on reducing the loss between the actual and predicted classes rather than on learning discriminative features. To address these problems, in this paper we used a pre-trained CNN model as a backbone and introduced an effective deep CNN periocular recognition model, called linear discriminant analysis CNN (LDA-CNN), in which an LDA layer was incorporated after the last convolutional layer of the backbone model. The LDA layer forced the model to learn features with small within-class variation and large between-class separation. Finally, a new fully connected (FC) layer with softmax activation was added after the LDA layer, and the model was fine-tuned in an end-to-end manner. The proposed model was extensively evaluated on four benchmark unconstrained periocular datasets: UFPR, UBIRIS.v2, VISOB, and UBIPr. The experimental results indicate that LDA-CNN outperforms state-of-the-art methods for periocular recognition in unconstrained environments.
To interpret the performance, we visualized the discriminative power of the features extracted from different layers of the LDA-CNN model using the t-distributed Stochastic Neighbor Embedding (t-SNE) visualization technique. Moreover, we conducted cross-condition experiments (cross-light, cross-sensor, cross-eye, cross-pose, and cross-database) that demonstrated the ability of the proposed model to generalize well to different unconstrained conditions.
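The abstract does not specify how the LDA layer is implemented, but its stated objective — small within-class variation and large between-class separation — is the classical Fisher discriminant criterion. The following minimal NumPy sketch illustrates that criterion on feature vectors (such as those produced by a backbone's last convolutional layer); the function name, shapes, and data are illustrative, not the paper's implementation.

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Fisher LDA: find directions maximizing between-class scatter
    relative to within-class scatter.
    X: (n_samples, d) feature matrix, y: (n_samples,) integer labels."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T
    # Solve Sw^{-1} Sb w = lambda w; top eigenvectors span the
    # most discriminative subspace (at most n_classes - 1 dims).
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real
```

Projecting features with the returned matrix (`X @ W`) collapses each class around its mean while pushing class means apart — the property the LDA layer is described as enforcing during end-to-end fine-tuning.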
