Abstract
When people observe, interact, or speak with one another, they tend to focus their attention on the ocular region of the face. This everyday experience motivates the analysis of periocular facial regions, which can be exploited to identify individuals in applications such as access control, telebanking, and electronic transactions. In this paper we study the effectiveness of periocular regions for gender and race prediction. Most prior work relies on local texture descriptors such as LBP (Local Binary Patterns) and HOG (Histogram of Oriented Gradients) to predict gender. Deep learning techniques have also been proposed for this task; however, they require large amounts of gender-labeled periocular data, which are not available. Moreover, gender and race cues can be less expressive in the final representations of deep architectures than in the earlier layers. To overcome these limitations, and given the strong impact of DCNN (Deep Convolutional Neural Network) techniques on many problems in biometrics, we propose a deep architecture based on visual attention over the periocular region for gender and race prediction. Visual saliency is extracted from the activations of the primary layers by analyzing their feature maps. We study how these visual attention-based features, coupled with deep neural networks, can discriminate gender and race, thereby extracting significant features from periocular regions. Pretrained architectures such as AlexNet and ResNet-50 are used to extract visual saliency points, or interest points. Several experiments were performed on periocular regions and a comparative study was conducted. The results demonstrate both the feasibility and the robustness of the extracted interest points.
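To make the core idea concrete, the sketch below illustrates one way to derive saliency points from early-layer feature maps of a pretrained network, as the abstract describes. This is only a minimal illustration under stated assumptions, not the paper's exact method: the choice of ResNet-50's first residual stage, the channel-averaging of activation magnitudes, and the helper name `saliency_points` are all assumptions for demonstration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained ResNet-50 in inference mode.
model = models.resnet50(weights="IMAGENET1K_V1").eval()

# Capture the activations of an early layer (here: the first residual
# stage, "layer1") with a forward hook. The layer choice is an assumption;
# the paper analyzes "primary layers" without this sketch's specifics.
activations = {}
def hook(module, inputs, output):
    activations["early"] = output.detach()

model.layer1.register_forward_hook(hook)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

def saliency_points(image_path, k=32):
    """Return a saliency map and the k most activated spatial
    locations of an early feature map (hypothetical helper)."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)       # shape (1, 3, 224, 224)
    with torch.no_grad():
        model(x)
    fmap = activations["early"][0]         # shape (C, H, W)
    # One plausible saliency definition: average activation
    # magnitude across channels at each spatial location.
    saliency = fmap.abs().mean(dim=0)      # shape (H, W)
    idx = saliency.flatten().topk(k).indices
    h, w = saliency.shape
    # Convert flat indices back to (row, col) feature-map coordinates.
    points = [(int(i) // w, int(i) % w) for i in idx]
    return saliency, points
```

The resulting interest points live in feature-map coordinates and would need to be scaled back to image coordinates; a classifier for gender or race would then be trained on features pooled around these points, following the attention-based pipeline the abstract outlines.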