Abstract

We present a novel classification technique based on sparse representation. The main idea of sparse representation for classification is the assumption that the training samples, or atoms, of a particular class form a linear basis for any new test sample belonging to that class. Most current sparse representation classification methods place no constraints on the coefficients of the linear combination of atoms, so the coefficients can be positive or negative; in addition, all training samples in the dictionary are treated equally. In this paper, we impose a non-negativity constraint on the components of the coefficient vector to ensure that the vector represents the contributions of the training samples to the query, which is more natural for classification purposes. We also use the mutual information between the query sample and each training sample to assign a weight to each atom in the dictionary. These weights reduce the search space and speed up the convergence of the algorithm that finds the coefficient vector. Experiments conducted on the Extended Yale B database for face recognition and on the University of Notre Dame (UND) database for ear recognition show that the proposed non-negative weighted sparse representation, obtained with a smoothed l0 norm, outperforms other state-of-the-art classifiers.
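
To make the pipeline concrete, the sketch below illustrates the three steps the abstract describes: mutual-information weighting of atoms, non-negative sparse coding, and a minimum-residual decision rule. It is a minimal sketch under stated assumptions, not the paper's implementation: a weighted non-negative lasso (scikit-learn) stands in for the smoothed-l0 solver the paper proposes, mutual information is estimated by histogram binning, and the names `mi_weights` and `classify_nnwsr` are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mutual_info_score

def mi_weights(query, atoms, n_bins=16):
    """Penalty weight per atom from its mutual information with the query.
    Atoms sharing more information with the query get a smaller penalty.
    MI is estimated by histogram binning (an assumption; the paper's
    estimator may differ)."""
    edges = np.linspace(query.min(), query.max(), n_bins + 1)[1:-1]
    q = np.digitize(query, edges)
    mi = np.array([mutual_info_score(q, np.digitize(a, edges))
                   for a in atoms.T])
    mi = mi / (mi.max() + 1e-12)          # normalize MI to [0, 1]
    return 1.0 / (mi + 1e-3)              # high MI -> small penalty weight

def classify_nnwsr(query, atoms, labels, alpha=1e-3, n_bins=16):
    """Non-negative weighted sparse coding + minimum-residual SRC rule.
    atoms: (d, n) dictionary, one training sample per column;
    labels: length-n array of class labels for the columns."""
    w = mi_weights(query, atoms, n_bins)
    # Rescaling column j by 1/w_j turns the uniform l1 penalty of Lasso
    # into the per-atom weighted penalty sum_j w_j * x_j (with x_j >= 0).
    model = Lasso(alpha=alpha, positive=True, fit_intercept=False,
                  max_iter=10000)
    model.fit(atoms / w, query)
    x = model.coef_ / w                   # coefficients w.r.t. original atoms
    # Assign the class whose atoms best reconstruct the query.
    classes = np.unique(labels)
    residuals = [np.linalg.norm(query - atoms[:, labels == c] @ x[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

In a faithful implementation, the lasso step would be replaced by the smoothed-l0 minimization the paper develops; the mutual-information weighting and the class-wise minimum-residual decision rule remain the same.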

