Abstract

Partial multilabel learning (PML) aims to learn from training data in which each instance is associated with a set of candidate labels, only a subset of which is correct. A common strategy for this problem is disambiguation, that is, identifying the ground-truth labels among the given candidates. However, existing PML approaches typically focus on leveraging instance relationships to disambiguate the noisy label space, while potentially useful information in the label space itself remains underexplored. Meanwhile, noise and outliers in the training data make the disambiguation operation less reliable, which inevitably reduces the robustness of the learned model. In this article, we propose a prior label knowledge regularized self-representation PML approach, called PAKS, in which the self-representation scheme and prior label knowledge are jointly incorporated into a unified framework. Specifically, we introduce a self-representation model with a low-rank constraint, which learns subspace representations of distinct instances and explores the high-order underlying correlations among them. Meanwhile, we incorporate prior label knowledge into this self-representation model, where the prior label knowledge is treated as a complement to the features for obtaining an accurate self-representation matrix. The core of PAKS is to exploit the data membership preference derived from the prior label knowledge to purify the discovered membership of the data and thereby obtain a more representative feature subspace for model induction. Extensive experiments on both synthetic and real-world datasets show that our proposed approach achieves performance superior or comparable to state-of-the-art approaches.
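To make the described idea concrete, here is a minimal sketch (not the authors' exact PAKS formulation) of a low-rank self-representation objective regularized by a label-derived affinity. The names `self_representation`, `S_prior`, and the trade-off weights `lam` and `gamma` are illustrative assumptions; the smooth part is minimized by proximal gradient steps, with singular value thresholding enforcing the nuclear-norm (low-rank) penalty:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def self_representation(X, S_prior, lam=0.5, gamma=0.1, n_iter=300):
    """Illustrative solver (assumed formulation, not the paper's algorithm) for
        min_Z 0.5*||X - X Z||_F^2 + 0.5*gamma*||Z - S_prior||_F^2 + lam*||Z||_*
    where X is a (d, n) feature matrix and S_prior an (n, n) affinity
    derived from candidate labels, acting as prior label knowledge."""
    n = X.shape[1]
    Z = np.zeros((n, n))
    G = X.T @ X
    step = 1.0 / (np.linalg.norm(G, 2) + gamma)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = G @ Z - G + gamma * (Z - S_prior)  # gradient of smooth terms
        Z = svt(Z - step * grad, step * lam)      # proximal (low-rank) step
    return Z

# Toy usage: affinity from a random candidate-label matrix Y (n x q).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Y = rng.integers(0, 2, (8, 3)).astype(float)
S_prior = Y @ Y.T
S_prior /= max(S_prior.max(), 1.0)
Z = self_representation(X, S_prior)
```

The resulting `Z` plays the role of a self-representation matrix: each instance is reconstructed from the others, with the label-affinity term pulling the discovered membership toward instances that share candidate labels.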

