Abstract
Extended sparse representation classification (ESRC) is one of the benchmark algorithms for single sample face recognition (SSFR). However, when there are many single training samples, the execution time of ESRC is unacceptable for real face recognition systems. We assume a similarity principle of sparse representation for SSFR: if a test image is more similar to a given single training sample, then the sparse coefficient of that sample tends to be larger and its representation residual tends to be smaller. Based on this assumption, we propose fast ESRC methods to tackle the many-single-training-samples problem. First, we propose positive-sparse-coefficient-based ESRC (PESRC), which computes representation residuals only for the single training samples whose ESRC sparse coefficients are positive. Second, we propose a statistical analysis of the sparse coefficient ratio, which we use to develop large-positive-sparse-coefficient-based ESRC (LESRC); LESRC calculates representation residuals only for the single training samples with large positive sparse coefficients in PESRC. Finally, experimental results on the Extended Yale B, AR, CMU PIE, and VGGFace2 face databases indicate that the proposed PESRC and LESRC significantly improve the computational efficiency of ESRC. On our platform, with 9125 single training samples, the execution time for recognizing a single test image exceeds 130 s for both ESRC and VGG + ESRC (562.71 s and 135.02 s, respectively), whereas the execution times of PESRC, LESRC, VGG + PESRC, and VGG + LESRC are 2.20 s, 0.23 s, 0.71 s, and 0.03 s, respectively.
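The core idea of PESRC described above can be sketched in a few lines of NumPy. The sketch below is illustrative only: it uses a generic ISTA solver for the l1-regularized coding step and omits ESRC's intra-class variation dictionary for brevity; the function names (`ista_lasso`, `pesrc_classify`) and the parameter `lam` are our own assumptions, not from the paper. The speed-up comes from the candidate-selection step: residuals are evaluated only for training samples whose sparse coefficients are positive.

```python
import numpy as np

def ista_lasso(D, x, lam=0.1, n_iter=200):
    """Illustrative l1 sparse-coding solver (ISTA):
    min_a 0.5 * ||x - D a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the smooth term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def pesrc_classify(D, labels, x, lam=0.1):
    """PESRC-style decision (sketch): compute representation residuals
    only for single training samples with positive sparse coefficients."""
    a = ista_lasso(D, x, lam)
    candidates = np.where(a > 0)[0]        # skip non-positive coefficients
    best_label, best_res = None, np.inf
    for i in candidates:
        # residual when x is represented by this single training sample alone
        res = np.linalg.norm(x - D[:, [i]] @ a[[i]])
        if res < best_res:
            best_label, best_res = labels[i], res
    return best_label
```

Because the residual loop runs only over the positive-coefficient candidates rather than all single training samples, its cost scales with the (small) support of the sparse code instead of the dictionary size, which is the source of the reported speed-up.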