Abstract

In real-world application scenarios, multi-label learning (MLL) datasets often contain irrelevant noisy labels, which degrade the performance of traditional multi-label learning models. Partial multi-label learning (PML) was proposed to deal with this problem: each instance is associated with a candidate label set that contains both relevant ground-truth labels and irrelevant noisy labels. The common strategy is to disambiguate the candidate label set, but the co-occurrence of noisy labels and ground-truth labels makes the disambiguation technique susceptible to error. In this paper, a novel disambiguation-free PML approach named PML-TT is proposed. Specifically, by adapting the tri-training framework, mutual cooperation and iteration among classifiers are used to correct noisy labels and improve the performance of the learning model. Moreover, the three-way decision is adapted to resolve conflicts among the base classifiers and obtain more useful training samples. In addition, the precise supervisory information of the non-candidate labels is exploited to make the predictions of the base classifiers more accurate. Finally, experimental results on both synthetic and real-world PML datasets show that the proposed PML-TT approach can effectively reduce the negative influence of noisy labels and learn a robust model.
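The combination of tri-training-style voting with a three-way decision can be illustrated with a minimal sketch. This is a generic, hypothetical illustration under simplifying assumptions (one binary candidate label, simple decision stumps as base classifiers), not the actual PML-TT algorithm: three classifiers trained on bootstrap samples vote on each candidate label, and the three-way decision accepts a label on unanimous positive votes, rejects it on unanimous negative votes, and defers otherwise.

```python
# Hypothetical sketch: tri-training-style voting with a three-way decision
# (accept / reject / defer) on one binary candidate label. Illustrative
# only; the stump classifiers and thresholds are assumptions, not PML-TT.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 instances, 5 features, one binary candidate label.
X = rng.normal(size=(200, 5))
true_y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Candidate labels: ground truth plus ~20% injected noisy positives.
noisy_y = true_y.copy()
noisy_y[rng.random(200) < 0.2] = 1


def fit_stump(X, y):
    """Pick the single feature/threshold pair with lowest training error."""
    best = None
    for f in range(X.shape[1]):
        thr = X[:, f].mean()
        err = ((X[:, f] > thr).astype(int) != y).mean()
        if best is None or err < best[0]:
            best = (err, f, thr)
    return best[1], best[2]


def predict_stump(stump, X):
    f, thr = stump
    return (X[:, f] > thr).astype(int)


# Train three base classifiers on independent bootstrap samples.
stumps = []
for _ in range(3):
    idx = rng.integers(0, len(X), size=len(X))
    stumps.append(fit_stump(X[idx], noisy_y[idx]))

# Three-way decision on each candidate label:
#   all three vote 1 -> accept, all vote 0 -> reject, else defer.
votes = np.stack([predict_stump(s, X) for s in stumps])  # shape (3, n)
accept = votes.sum(axis=0) == 3
reject = votes.sum(axis=0) == 0
defer = ~(accept | reject)
print(accept.sum(), reject.sum(), defer.sum())
```

Deferred instances are the ones a tri-training-style scheme would revisit in later iterations, once the retrained classifiers can break the tie.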
