Abstract

The rapid advancement of deep Convolutional Neural Networks (CNNs) has led to remarkable progress in computer vision, contributing to the development of numerous face verification architectures. However, the inherent complexity of these architectures, often characterized by millions of parameters and substantial computational demands, presents significant challenges for deployment on resource-constrained devices. To address these challenges, we introduce RobFaceNet, a robust and efficient CNN designed explicitly for face recognition (FR). The proposed RobFaceNet optimizes accuracy while preserving computational efficiency, a balance achieved by incorporating multiple features and attention mechanisms. These features include both low-level and high-level attributes extracted from input face images and aggregated across multiple levels. Additionally, the model incorporates a newly developed bottleneck that integrates both channel and spatial attention mechanisms. The combination of multiple features and attention mechanisms enables the network to capture more significant facial features from the images, thereby enhancing its robustness and the quality of facial feature extraction. Experimental results across state-of-the-art FR datasets demonstrate that RobFaceNet achieves higher recognition performance. For instance, RobFaceNet achieves 95.95% and 92.23% on the CA-LFW and CP-LFW datasets, respectively, compared to 95.45% and 92.08% for the very deep ArcFace model. Meanwhile, RobFaceNet has a far lighter model complexity: its computational cost is 337M floating-point operations (FLOPs) versus ArcFace's 24211M, with only 3% of the parameters. Consequently, RobFaceNet is well-suited for deployment across various platforms, including robots, embedded systems, and mobile devices.
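The bottleneck described above combines channel and spatial attention. As a rough illustration of how such a combination operates on a feature map, the sketch below applies a CBAM-style channel attention stage followed by a spatial attention stage. This is a hypothetical NumPy approximation, not the paper's actual RobFaceNet bottleneck: the MLP weights are random placeholders, and the usual 7×7 spatial convolution is simplified to an element-wise pooling mix.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, reduction=4):
    """Reweight channels of x (shape (C, H, W)) by pooled statistics."""
    c = x.shape[0]
    avg = x.mean(axis=(1, 2))  # per-channel average pooling, shape (C,)
    mx = x.max(axis=(1, 2))    # per-channel max pooling, shape (C,)
    # Shared two-layer MLP; weights are random here, purely illustrative.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    def mlp(v):
        return w2 @ np.maximum(w1 @ v, 0.0)  # ReLU hidden layer
    weights = sigmoid(mlp(avg) + mlp(mx))    # attention weights, shape (C,)
    return x * weights[:, None, None]

def spatial_attention(x):
    """Reweight spatial positions of x by a mask pooled across channels."""
    avg = x.mean(axis=0, keepdims=True)  # shape (1, H, W)
    mx = x.max(axis=0, keepdims=True)    # shape (1, H, W)
    mask = sigmoid(avg + mx)             # simplified stand-in for a 7x7 conv
    return x * mask

def attention_bottleneck(x):
    """Channel attention followed by spatial attention, CBAM-style."""
    return spatial_attention(channel_attention(x))

feat = np.random.default_rng(1).standard_normal((8, 4, 4))
out = attention_bottleneck(feat)
print(out.shape)  # output keeps the input shape: (8, 4, 4)
```

In a real network the attention stages would be learned layers inside each bottleneck block; the key property shown here is that both stages only rescale the feature map, so the output shape matches the input and the block can be dropped into an existing CNN.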
