Abstract

In this paper, we discuss classification based on a sparse-code auto-extractor. A joint label-consistent embedding and dictionary learning approach is proposed to deliver a linear sparse-code auto-extractor and a multi-class classifier by simultaneously minimizing the sparse reconstruction, discriminative sparse-code, code approximation and classification errors. The auto-extractor is characterized by a projection that bridges signals with sparse codes by learning special features from the input signals to characterize the sparse codes. The classifier is trained directly on the extracted sparse codes. In our setting, the performance of the classifier depends on the discriminability of the sparse codes, and the representation power of the extractor likewise depends on their discriminability, so we incorporate label information into the dictionary learning to enhance the discriminability of the sparse codes. Thus, for inductive classification, our model forms an integrated process from a test signal to its sparse code and finally to an assigned label, which is essentially different from existing sparse coding based approaches that involve an extra sparse reconstruction over the trained dictionary for each test signal. Remarkable results are obtained by our model compared with other state-of-the-art methods.
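
To make the inductive pipeline concrete, the following minimal Python sketch illustrates the test-time path implied by the abstract: a test signal is mapped to an approximate sparse code by a learned linear projection and then scored by a linear multi-class classifier, with no per-signal sparse reconstruction against the dictionary. The matrices P and W here hold random placeholder values and the function name classify is illustrative; they are not the paper's actual trained quantities or implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, c = 64, 128, 10             # signal dimension, dictionary size, number of classes
    P = rng.standard_normal((k, d))   # placeholder for the learned linear code auto-extractor
    W = rng.standard_normal((c, k))   # placeholder for the learned multi-class classifier

    def classify(x):
        """Test signal -> sparse code -> assigned label, with no per-signal optimization."""
        a = P @ x                      # approximate the sparse code via the learned projection
        return int(np.argmax(W @ a))   # assign the label with the largest classifier score

    x_test = rng.standard_normal(d)
    print(classify(x_test))

In contrast, the sparse coding baselines mentioned above would solve an extra sparse reconstruction problem over the trained dictionary for every test signal before classification.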
