Abstract

Virtually all existing label propagation (LP) approaches estimate the unknown labels of points directly in the original input space, so the transductive results are "shallow" and usually contain unfavorable mixed signs that can degrade the performance of both transductive models and out-of-sample extensions. To address this issue, we propose a Projective Label Propagation (ProjLP) framework based on label embedding, which delivers more discriminating "deep" labels of samples to enhance representation and classification. ProjLP improves the transductive predictions by simultaneously learning a robust projection that removes the unfavorable mixed signs and converts the shallow labels into discriminating deep ones. By including a regressive reconstruction loss term that correlates the extracted discriminative features with the deep labels, we also present an out-of-sample extension of ProjLP that delivers a linear neighborhood-preserving projection classifier, where the neighborhood-preserving power comes from sharing the same graph weights between the original embedded deep label space and the approximated label space. Thus, the deep label of each new sample can be obtained by embedding it directly with the classifier. We also describe an inclusion method that reconstructs the deep label of a new sample from the deep labels of its neighbors. To illustrate the deep property of the embedded "deep" labels, we present a multilayer network architecture of our model. Simulations on artificial and real-world datasets demonstrate the effectiveness of our techniques for representation and classification compared with other state-of-the-art methods.
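
The abstract does not spell out the exact objective, but the overall pipeline it describes (graph-based propagation of labels, followed by a learned linear projection used as an out-of-sample classifier) can be sketched as below. This is a minimal illustrative stand-in, not the paper's actual formulation: it assumes a Zhou-style normalized label propagation and a simple ridge-regression projection, and the function names (`rbf_graph`, `label_propagation`, `fit_projection`) and all parameters are hypothetical.

```python
import numpy as np

def rbf_graph(X, sigma=1.0, k=10):
    """Hypothetical helper: k-NN graph with RBF weights; rows of X are samples."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # keep only the k largest weights per row, then symmetrize
    drop = np.argsort(-W, axis=1)[:, k:]
    np.put_along_axis(W, drop, 0.0, axis=1)
    return np.maximum(W, W.T)

def label_propagation(W, Y, alpha=0.99, n_iter=50):
    """Standard (Zhou et al.-style) label propagation, used here as a stand-in
    for the transductive step. Y: n x c one-hot rows for labeled points,
    zero rows for unlabeled points. Returns the "shallow" soft labels."""
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D) + 1e-12)   # symmetric normalization D^-1/2 W D^-1/2
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

def fit_projection(X, F, lam=1e-2):
    """Ridge regression mapping features X to the propagated labels F, a simple
    proxy for the learned projection / out-of-sample classifier in the abstract."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ F)

# Usage sketch (shapes: X is n x d, Y is n x c one-hot for labeled rows, zeros otherwise):
#   W = rbf_graph(X)
#   F = label_propagation(W, Y)    # transductive "shallow" soft labels
#   P = fit_projection(X, F)       # linear classifier for out-of-sample data
#   y_new = x_new @ P              # label estimate for a new sample x_new
```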
