Abstract

Sparse subspace learning (SSL) has recently attracted considerable attention from researchers. SSL methods aim to project samples into a low-dimensional subspace that preserves the sparse correlations of the dataset. However, most SSL methods rely on sparse representation (SR), which constructs these correlations without label information, so labels cannot be fully exploited to improve the discriminative ability of SSL methods. To overcome this drawback, this paper proposes a novel method called semi-supervised sparsity preserving projection (SSPP). SSPP first combines label information with SR to construct sparse correlations between samples; the use of labels avoids many erroneous correlations. Then, to further improve the discriminative ability of SSPP, a large-margin criterion is adopted. Extensive experiments demonstrate the excellent performance of SSPP.
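To make the construction concrete, the following is a minimal Python sketch of a label-aware sparsity preserving projection in the spirit described above. It is an illustration under several assumptions, not the paper's algorithm: sparse codes are obtained with an ℓ1-regularized (Lasso) solver, labeled samples are reconstructed only from same-class and unlabeled samples, and the projection is computed from the standard SPP-style generalized eigenproblem; the large-margin criterion mentioned in the abstract is omitted. The function name `semi_supervised_spp` and the parameters `alpha` and `n_components` are illustrative.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso


def semi_supervised_spp(X, y, n_components=2, alpha=0.01):
    """Label-aware sparsity preserving projection (illustrative sketch).

    X : (n_samples, n_features) data matrix
    y : (n_samples,) labels; -1 marks an unlabeled sample
    Returns a (n_features, n_components) projection matrix.
    """
    n = X.shape[0]
    S = np.zeros((n, n))  # sparse reconstruction weights

    for i in range(n):
        if y[i] >= 0:
            # Labeled sample: allow reconstruction only from same-class
            # and unlabeled samples, so cross-class correlations are avoided.
            idx = [j for j in range(n) if j != i and (y[j] == y[i] or y[j] < 0)]
        else:
            # Unlabeled sample: reconstruct from all other samples.
            idx = [j for j in range(n) if j != i]

        # x_i ~ sum_j s_ij x_j with sparse coefficients s_i (Lasso as l1 solver).
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[idx].T, X[i])
        S[i, idx] = lasso.coef_

    # SPP-style objective: keep the sparse reconstructions after projection.
    # Generalized eigenproblem  X^T (S + S^T - S^T S) X w = lambda X^T X w,
    # keeping the eigenvectors with the largest eigenvalues.
    M = S + S.T - S.T @ S
    A = X.T @ M @ X
    B = X.T @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for numerical stability
    _, vecs = eigh(A, B)
    return vecs[:, ::-1][:, :n_components]
```

In this sketch the label information only restricts which samples may participate in each reconstruction; a full implementation of the proposed method would additionally incorporate the large-margin term into the projection objective.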
