Abstract

For sparse-representation- or sparse-coding-based image classification, the dictionary, which must faithfully and robustly represent query images, plays an important role in the method's success. Learning dictionaries from the training data for sparse coding has achieved state-of-the-art results in image classification and face recognition. However, for face recognition, conventional dictionary learning methods cannot learn a reliable and robust dictionary because they suffer from the small-sample-size problem. Another significant issue is that current dictionary learning methods do not fully cover the important components of signal representation (e.g., commonality, particularity, and disturbance), which limits their performance. To address these issues, this paper proposes a novel robust, discriminative and comprehensive dictionary learning (RDCDL) method, in which a robust dictionary is learned from comprehensive training-sample diversities obtained by extracting and generating facial variations. In particular, to fully represent the commonality, particularity and disturbance, class-shared, class-specific and disturbance dictionary atoms are learned to represent the data from different classes. Discriminative regularizations on the dictionary and the representation coefficients are used to exploit discriminative information, which effectively improves the classification capability of the dictionary. The proposed RDCDL method is extensively evaluated on benchmark face image databases and shows performance superior to many state-of-the-art dictionary learning methods for face recognition.
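To make the representation model concrete, the following is a minimal sketch of sparse-representation-based classification with a composite dictionary made of class-shared, class-specific and disturbance sub-dictionaries, as described above. It is not the authors' RDCDL algorithm (no dictionary learning or discriminative regularization is performed); the sub-dictionaries D_shared, D_class and D_dist, their sizes, and the Lasso-based sparse coder are illustrative assumptions.

```python
# Sketch only: sparse coding over a composite dictionary and classification by
# class-specific reconstruction residual. Random sub-dictionaries are hypothetical
# stand-ins for the learned class-shared, class-specific and disturbance atoms.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, k_shared, k_class, k_dist, n_classes = 64, 10, 8, 6, 3

def unit_cols(m):
    # Normalize dictionary atoms (columns) to unit l2 norm.
    return m / np.linalg.norm(m, axis=0, keepdims=True)

D_shared = unit_cols(rng.standard_normal((d, k_shared)))      # commonality
D_class = [unit_cols(rng.standard_normal((d, k_class)))       # particularity
           for _ in range(n_classes)]
D_dist = unit_cols(rng.standard_normal((d, k_dist)))          # disturbance
D = np.hstack([D_shared, *D_class, D_dist])                   # composite dictionary

# Synthetic query: a class-0 atom plus small noise.
y = D[:, k_shared + 2] + 0.05 * rng.standard_normal(d)

# Sparse-code y over the composite dictionary (l1-regularized least squares).
coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
alpha = coder.fit(D, y).coef_

# Classify by class-specific residual: the shared and disturbance contributions
# are kept in every reconstruction, only the class-specific part varies.
shared_part = D_shared @ alpha[:k_shared]
dist_part = D_dist @ alpha[-k_dist:]
residuals = []
for c in range(n_classes):
    start = k_shared + c * k_class
    class_part = D_class[c] @ alpha[start:start + k_class]
    residuals.append(np.linalg.norm(y - (shared_part + class_part + dist_part)))
print("predicted class:", int(np.argmin(residuals)))
```

In RDCDL the sub-dictionaries would be learned from the training data rather than drawn at random, but the decomposition into commonality, particularity and disturbance terms and the residual-based decision rule follow the same structure.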
