Abstract
With the advancement of information technology and the growth of society, social security has become more important than ever. Face recognition, unlike traditional biometric methods such as fingerprint or palm recognition, is contactless, and it is now one of the most prominent technologies under development. Although numerous recognition systems in the field of facial recognition use deep neural networks (DNNs), their accuracy and practicality remain insufficient for real-world applications. This work proposes a face recognition approach based on ResNet-152 v2, presenting a residual learning framework that eases the training of networks substantially deeper than those previously employed. The proposed method uses the AT&T face dataset; assuming that normalization and segmentation are complete, we focus on the subtask of person verification and recognition, demonstrating performance on a test set containing variations in illumination, pose, expression, and occlusion. The softmax activation function is used, which normalizes the outputs so that they sum to one and can be interpreted as probabilities; the model then makes its decision based on the class with the highest probability. The system employs the Adam optimizer to control the learning rate during training and categorical cross-entropy as its loss function. After extensive analysis and experimental verification, the proposed approach achieves 97% face recognition accuracy on the AT&T dataset, demonstrating its efficacy.
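The softmax output and categorical cross-entropy loss described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the logit values are hypothetical, standing in for the final-layer outputs of a network such as ResNet-152 v2.

```python
import numpy as np

def softmax(logits):
    # Subtract the max logit for numerical stability,
    # then exponentiate and normalize so the outputs sum to one.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def categorical_cross_entropy(probs, true_index):
    # Loss is the negative log-probability assigned to the true class;
    # it is small when the model is confident and correct.
    return -np.log(probs[true_index])

# Hypothetical logits for a 4-identity classification problem
logits = np.array([2.0, 1.0, 0.1, -1.0])
probs = softmax(logits)

print(probs.sum())            # sums to 1, so values act as probabilities
print(int(np.argmax(probs)))  # decision: the class with the highest probability
print(categorical_cross_entropy(probs, 0))
```

In training, an optimizer such as Adam would adjust the network weights to minimize this loss averaged over the training set; at test time only the argmax decision is used.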
Published in: IOP Conference Series: Materials Science and Engineering