Abstract
Deep learning currently drives rapid progress in face recognition. In unconstrained scenarios, however, changes in facial pose strongly degrade recognition, and current models still fall short in accuracy and robustness. Existing research addresses these problems in two ways. The first trains a separate model for each pose and then fuses their decisions. The second "frontalizes" faces at the image or feature level, reducing the task to frontal face recognition. Following the second idea, we propose a profile-to-frontal revise mapping (PTFRM) module. The module revises features of arbitrary poses at the feature level, transforming multi-pose features into an approximately frontal representation to enhance the recognition ability of existing models. Finally, we evaluate PTFRM on unconstrained face verification benchmark datasets such as Labeled Faces in the Wild (LFW), Celebrities in Frontal-Profile (CFP), and the IARPA Janus Benchmark A (IJB-A). Results show that the proposed method achieves good performance.
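As a rough illustration of the feature-level idea described above, a residual revise mapping can be sketched as a small network that adds a learned correction to a pose-varying feature vector. This NumPy sketch is only a conceptual toy, not the paper's actual PTFRM architecture: the function name, layer sizes, and random weights are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def revise_mapping_sketch(feat, W1, W2):
    """Hypothetical residual revise mapping: frontalized = feat + MLP(feat).

    The skip connection mirrors the ResNet-style design the module is based on:
    the mapping only needs to learn the pose correction, not the whole feature.
    """
    h = np.maximum(W1 @ feat, 0.0)  # ReLU hidden layer
    return feat + W2 @ h            # residual (identity) shortcut

dim, hidden = 128, 64               # placeholder dimensions
W1 = rng.standard_normal((hidden, dim)) * 0.01
W2 = rng.standard_normal((dim, hidden)) * 0.01

profile_feat = rng.standard_normal(dim)          # stand-in profile-face feature
frontalized = revise_mapping_sketch(profile_feat, W1, W2)
```

In a trained system the weights would be learned so that `frontalized` approximates the feature of the same identity's frontal face; here they are random and serve only to show the residual structure.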
Highlights
In recent years, deep learning has achieved great success in the field of face recognition
Existing face recognition methods based on deep learning mainly comprise the following modules: image preprocessing, training a convolutional neural network (CNN) to extract features, and face verification and recognition
We propose a profile-to-frontal revise mapping (PTFRM) module based on residual networks (ResNet)
Summary
The emergence of deep learning has brought great success to face recognition. In unconstrained scenes, however, factors such as changes in illumination, occlusion, pose, and expression still strongly interfere with the accuracy and robustness of face recognition. Given the pose variations, directly learning a feature representation that is geometrically invariant to large pose changes is challenging. Existing face recognition methods based on deep learning mainly comprise the following modules: image preprocessing, training a convolutional neural network (CNN) to extract features, and face verification and recognition. Image preprocessing includes face detection, alignment, normalization, and random flipping; it resizes the facial image to a fixed size that serves as the input to the CNN. Face verification and recognition are then performed by comparing a score, obtained from a similarity measure or the Euclidean distance, against a threshold.
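The threshold-based verification step above can be sketched in a few lines. This is a minimal illustration assuming cosine similarity as the measure; the function names and the 0.5 threshold are placeholders, and the three short vectors stand in for real CNN feature embeddings.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(feat_a, feat_b, threshold=0.5):
    """Accept the pair as the same identity if similarity exceeds the threshold."""
    return cosine_similarity(feat_a, feat_b) >= threshold

# Toy 3-D "embeddings" standing in for CNN features.
a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.1, 0.9])   # nearly parallel to a
c = np.array([-1.0, 1.0, 0.0])  # points away from a

verify(a, b)  # → True  (same identity)
verify(a, c)  # → False (different identity)
```

The same structure applies if the Euclidean distance is used instead, with the comparison direction reversed (accept when the distance falls below the threshold).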