Abstract

Partial multi-label learning (PML) aims to learn from training data where each example is associated with a set of candidate labels, among which only a subset is correct. The major challenge of PML is that the training procedure is prone to being misled by label noise. To address this problem, nearly all existing PML methods focus solely on label disambiguation, i.e., dislodging the noisy labels from the candidate label set and then utilizing the remaining credible labels for model induction. However, some of these remaining "credible" labels may be incorrectly identified, which would have a severely adverse impact on the subsequent model induction. In this paper, in contrast to the label disambiguation strategy above, we propose a simple yet effective Noisy lAbel Tolerated pArtial multi-label Learning (NATAL) method, in which the labeling information is considered precise while the feature information is assumed to be missing. Under this view, the task of PML can be reinterpreted as a feature completion problem, and the desired prediction model can be induced directly from the completed features together with all candidate labels. Extensive experimental results on various data sets clearly demonstrate the effectiveness of our proposed approach.
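The abstract only states the high-level idea: treat the candidate labels as precise, regard the features as partially missing, and solve a feature completion problem. The paper's actual completion procedure is not reproduced here, so the following is merely an illustrative sketch of generic low-rank feature completion ("hard-impute" style) in NumPy; the function name `complete_features` and the rank/iteration hyperparameters are our own assumptions, not NATAL's algorithm.

```python
import numpy as np

def complete_features(X, mask, rank=2, n_iter=50):
    """Fill the unobserved entries of X (where mask is False) with a
    rank-`rank` SVD approximation while keeping observed entries fixed
    (a basic "hard-impute" style of matrix completion; illustrative only)."""
    # start by filling missing entries with per-column means of observed values
    observed = np.where(mask, X, 0.0)
    col_means = observed.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
    X_hat = np.where(mask, X, col_means)
    for _ in range(n_iter):
        # project the current estimate onto the set of rank-`rank` matrices
        U, s, Vt = np.linalg.svd(X_hat, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # restore observed entries: only missing positions are imputed
        X_hat = np.where(mask, X, low_rank)
    return X_hat

# toy usage: a rank-2 feature matrix with roughly 30% of entries missing
rng = np.random.default_rng(0)
X_true = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 10))
mask = rng.random(X_true.shape) < 0.7   # True = observed entry
X_completed = complete_features(X_true, mask, rank=2)
```

In the reframed setting, a multi-label classifier would then be trained on the completed features against the full candidate label matrix, with no label pruning step at all.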
