Abstract

This study offers a novel face recognition and classification method based on classifiers that use statistical local features. The use of ResNet has generated growing interest across image processing and computer vision in recent years and has demonstrated its usefulness in several applications, especially facial image analysis, which spans tasks such as face detection, face recognition, facial expression analysis, and demographic classification. The work proceeds in two steps: face recognition and classification. The first step is automatic data cleansing, performed with Multi-Task Cascaded Convolutional Neural Networks (MTCNN) and face.evoLVe, with the MTCNN parameters adjusted to filter out dirty data. The authors then trained two models: Inception-ResNetV1 with pre-trained weights, and Altered-ResNet (A-ResNet), which uses Conv2d layers from ResNet for feature extraction together with pooling and softmax layers for classification. During training, several optimizers were compared and the best one was selected, along with various combinations of batch size and number of epochs. A-ResNet, the best model overall, detects 86 of 104 Labelled Faces in the Wild (LFW) images in 0.50 seconds. The proposed approach achieved an accuracy of 91.7%; in addition, the system reached a training accuracy of 98.53% and a testing accuracy of 99.15% for masked face recognition. The proposed method is competitive with other state-of-the-art algorithms and models. Finally, the suggested model may outperform ResNet because A-ResNet is simpler and can therefore perform at its best with little data, whereas deeper networks require larger datasets.
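The abstract does not give implementation details, so the following is only a minimal sketch of the described two-step pipeline. It assumes the facenet-pytorch package for MTCNN-based data cleansing and the pre-trained Inception-ResNetV1 baseline; the A-ResNet layer configuration, the `clean_face` helper, the `example.jpg` path, and the `NUM_CLASSES` value are illustrative placeholders, not the paper's exact architecture or data.

```python
# Sketch of the pipeline: MTCNN data cleansing, a pre-trained Inception-ResNetV1
# baseline, and a hypothetical A-ResNet-style Conv2d/pooling/softmax classifier.
import torch
import torch.nn as nn
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image

NUM_CLASSES = 10  # placeholder; set to the number of identities in the training set

# Step 1: automatic data cleansing -- tightened detection thresholds and a larger
# minimum face size discard "dirty" images where no reliable face is found.
mtcnn = MTCNN(image_size=160, margin=0, min_face_size=40,
              thresholds=[0.7, 0.8, 0.8], post_process=True)

def clean_face(path):
    """Return an aligned 160x160 face tensor, or None if detection fails."""
    return mtcnn(Image.open(path).convert("RGB"))

# Baseline model: Inception-ResNetV1 with pre-trained weights (VGGFace2 here).
baseline = InceptionResnetV1(pretrained="vggface2", classify=True,
                             num_classes=NUM_CLASSES).eval()

# Hypothetical A-ResNet-style model: a shallow stack of Conv2d blocks with
# pooling for feature extraction, followed by a softmax classification head.
class AResNet(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        # During training one would feed the raw logits to nn.CrossEntropyLoss;
        # softmax is applied here to mirror the classification head described above.
        logits = self.classifier(self.features(x).flatten(1))
        return torch.softmax(logits, dim=1)

if __name__ == "__main__":
    face = clean_face("example.jpg")          # hypothetical input image
    if face is not None:
        probs = AResNet()(face.unsqueeze(0))  # class probabilities, shape (1, NUM_CLASSES)
        print(probs.argmax(dim=1))
```

Keeping the classifier shallow, as in this sketch, reflects the abstract's argument that a simpler network can be trained effectively on a small dataset.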
