Abstract

An important aspect of deep learning for face recognition is the choice of a suitable loss function and optimization technique. Several loss functions trained with stochastic gradient descent have been proposed for this task. The main purpose of this work is to propose a strategy that combines Laplacian smoothing stochastic gradient descent with a multiplicative angular margin, in order to enhance the angular discriminability of the features learned by the angular-margin softmax loss for face recognition. The model is trained on the widely used CASIA-WebFace face recognition dataset and achieves state-of-the-art performance on several academic benchmark datasets, including Labeled Faces in the Wild (LFW), YouTube Faces (YTF), VGGFace1 and VGGFace2. Our method achieves a new record accuracy of 99.54% on the LFW dataset and 95.53% accuracy on the YTF dataset.
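The abstract does not spell out the implementation, so the following is only a minimal sketch of the two ingredients it names: a multiplicative angular-margin softmax (in the style of A-Softmax/SphereFace) and a Laplacian-smoothed gradient step. The class `MultiplicativeAngularMarginLoss`, the helper `laplacian_smooth`, and the parameters `m` and `sigma` are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: multiplicative angular-margin softmax + Laplacian smoothing SGD.
# All names and hyperparameters here are assumptions for illustration.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiplicativeAngularMarginLoss(nn.Module):
    """Softmax loss with a multiplicative angular margin m on the target-class angle."""

    def __init__(self, in_features: int, num_classes: int, m: int = 4):
        super().__init__()
        self.m = m
        self.weight = nn.Parameter(torch.randn(num_classes, in_features))

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # cos(theta) between L2-normalized class weights and feature directions
        w = F.normalize(self.weight, dim=1)
        x_norm = embeddings.norm(dim=1, keepdim=True)
        cos_theta = F.linear(F.normalize(embeddings, dim=1), w).clamp(-1.0, 1.0)

        # psi(theta) = (-1)^k * cos(m*theta) - 2k keeps the margin term monotone in theta
        theta = torch.acos(cos_theta)
        k = torch.floor(self.m * theta / math.pi)
        sign = 1.0 - 2.0 * (k % 2)
        psi = sign * torch.cos(self.m * theta) - 2.0 * k

        # use psi(theta) only for the target class, rescale logits by the feature norm
        one_hot = F.one_hot(labels, num_classes=cos_theta.size(1)).float()
        logits = x_norm * (one_hot * psi + (1.0 - one_hot) * cos_theta)
        return F.cross_entropy(logits, labels)


def laplacian_smooth(grad: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Solve (I - sigma * Laplacian) g_smooth = g via FFT (periodic 1-D stencil)."""
    g = grad.reshape(-1)
    n = g.numel()
    freq = torch.fft.rfftfreq(n)  # k / n
    denom = 1.0 + 2.0 * sigma * (1.0 - torch.cos(2.0 * math.pi * freq))
    g_smooth = torch.fft.irfft(torch.fft.rfft(g) / denom, n=n)
    return g_smooth.reshape(grad.shape)


# Usage sketch: smooth each parameter's gradient before the plain SGD update.
# for p in model.parameters():
#     if p.grad is not None:
#         p.grad = laplacian_smooth(p.grad, sigma=1.0)
# optimizer.step()
```

The smoothing step replaces each gradient with the solution of a small elliptic system, which damps high-frequency noise in the stochastic gradient while leaving its low-frequency components largely intact; the angular-margin term enlarges the decision margin between identities on the hypersphere.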
