Abstract

Over the last two decades, numerous methods have been developed to address the face recognition problem under scene-dependent conditions. However, these methods have not considered image quality degradations introduced during capture, processing, and transmission, such as blur and occlusion due to packet loss, occurring in combination with the same scene variations. Although deep neural networks achieve state-of-the-art results in face recognition, existing networks remain susceptible to such quality distortions. In this work, the authors propose an augmented sparse representation classifier (SRC) framework that improves the performance of the conventional SRC in the presence of Gaussian blur, camera-shake blur, and block occlusion, while preserving its robustness to scene-dependent variations. To evaluate the framework, they present a feature sparsity concentration and classification index that assesses feature quality in terms of both recognition accuracy and class-based sparsity concentration. For this purpose, they consider three main types of features: raw image pixels, histograms of oriented gradients (HOG), and deep learning visual geometry group (VGG)-Face features. The obtained results show that the proposed method outperforms state-of-the-art sparse-representation-based and blur-invariant methods.
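
For context, the sketch below illustrates the conventional SRC decision rule that the proposed framework builds on: a test sample is coded over a dictionary of training faces via l1-minimisation, assigned to the class with the smallest reconstruction residual, and accompanied by the sparsity concentration index (SCI) used to gauge how concentrated the code is on one class. This is a generic illustration only, not the authors' augmented framework or their feature sparsity concentration and classification index; the use of scikit-learn's Lasso as the l1 solver and the function name src_classify are assumptions for this example.

```python
# Minimal sketch of conventional SRC classification with the sparsity
# concentration index (SCI). Illustrative only; the augmented SRC framework
# and the proposed index from the paper are not reproduced here.
import numpy as np
from sklearn.linear_model import Lasso


def src_classify(D, labels, y, alpha=0.01):
    """Classify test vector y against dictionary D (columns = training faces).

    D: (d, N) array of training features, labels: (N,) class label per column.
    Returns (predicted class, sparsity concentration index).
    """
    D = D / np.linalg.norm(D, axis=0, keepdims=True)   # l2-normalise dictionary atoms
    y = y / np.linalg.norm(y)

    # Approximate l1-minimisation of the coding coefficients (assumed solver).
    solver = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    solver.fit(D, y)
    x = solver.coef_                                   # sparse code for y

    # Class-wise reconstruction residuals: keep only coefficients of one class at a time.
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        xc = np.where(labels == c, x, 0.0)
        residuals.append(np.linalg.norm(y - D @ xc))
    predicted = classes[int(np.argmin(residuals))]

    # SCI: 1 when all coefficient energy lies in a single class, 0 when spread evenly.
    k = len(classes)
    per_class_l1 = np.array([np.abs(x[labels == c]).sum() for c in classes])
    sci = (k * per_class_l1.max() / max(np.abs(x).sum(), 1e-12) - 1) / (k - 1)
    return predicted, sci


# Toy usage with synthetic data: 3 classes, 5 training faces each, 100-dim features.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 15))
labels = np.repeat(np.arange(3), 5)
y = D[:, 7] + 0.05 * rng.standard_normal(100)          # noisy copy of a class-1 atom
print(src_classify(D, labels, y))
```

The same decision rule applies regardless of the feature type, which is why the raw-pixel, HOG, and VGG-Face representations mentioned above can all be plugged in as columns of the dictionary.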
