Abstract

Partial multi-label (PML) learning refers to the modeling of prediction patterns from data annotated with partially correct labels. Label embedding, which finds a compact representation between the input and output spaces, begets a family of efficient multi-label classification algorithms. Nevertheless, most existing label embedding methods fail to capture an accurate subspace when a large portion of the data points are completely unlabeled, which often leads to performance degradation. Three specific problems must be addressed by semi-supervised embedding methods for multi-label (ML) learning: (1) the feature and label spaces need to be linked together for co-training; (2) the shared latent patterns underlying the two spaces need to be explored; (3) the embedded subspace should be capable of handling erroneously labeled data points and completely unlabeled data points simultaneously. To this end, we formulate a PML learning framework via a compact and shared label embedding in a semi-supervised setting. In particular, the tagging information and the discriminative information of the inputs are linked together in a shared label basis subspace by minimizing the reconstruction loss from the shared subspace to the label space while simultaneously maximizing the dependence between the shared subspace and the feature space. Moreover, the feature and label manifolds provide auxiliary regularization for the framework. Theoretical and empirical studies demonstrate the effectiveness of the proposed approach.
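The formulation described above can be sketched as a single objective; the symbols below ($V$, $B$, $\lambda$, $\beta_1$, $\beta_2$) are illustrative placeholders, not the paper's actual notation:

```latex
\min_{V, B} \; \underbrace{\|Y - VB\|_F^2}_{\text{reconstruction of the label space}}
\; - \; \lambda \, \underbrace{\mathrm{HSIC}(V, X)}_{\text{dependence on the feature space}}
\; + \; \underbrace{\beta_1 \operatorname{tr}(V^\top L_X V) + \beta_2 \operatorname{tr}(B L_Y B^\top)}_{\text{feature / label manifold regularization}}
```

Here $Y$ is the (partially correct) label matrix, $V$ is the shared label basis subspace, $B$ maps the subspace back to the labels, $\mathrm{HSIC}$ is one standard choice of dependence measure (assumed here, not confirmed by the abstract), and $L_X$, $L_Y$ are graph Laplacians encoding the feature and label manifolds. Unlabeled points contribute to the HSIC and manifold terms even when their rows of $Y$ are empty, which is how a semi-supervised method of this kind can exploit them.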
