Abstract

Partial multi-label learning (PML) addresses problems in which each instance is assigned a candidate label set, only a subset of which is correct. The central challenge of PML is that training can easily be misguided by the noisy labels. Existing PML studies exhibit two significant drawbacks. First, most do not sufficiently exploit complex label correlations, which could improve label disambiguation. Second, PML models rely heavily on prior assumptions, limiting their applicability to specific scenarios. In this work, we propose a novel PML method based on an Encoder-Decoder framework (PML-ED) to address these drawbacks. PML-ED first estimates the label probability distribution through a KNN label attention mechanism. It then adopts Conditional Layer Normalization (CLN) to capture high-order label correlations, and relaxes the prior assumption on label noise by introducing a universal Encoder-Decoder framework. As a result, PML-ED is not only more efficient than state-of-the-art methods but also capable of handling data with heavy label noise across different domains. Experimental results on 28 benchmark datasets show that PML-ED achieves the highest average rank against nine leading-edge PML algorithms across five evaluation criteria.
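The abstract names Conditional Layer Normalization (CLN) as the mechanism for injecting label-correlation information, but does not give its formulation. Below is a minimal NumPy sketch of the standard CLN idea, in which the scale and shift of a LayerNorm are generated from a conditioning vector (here, a label embedding); all variable names (`cond`, `W_gamma`, `W_beta`, dimensions) are hypothetical illustrations, not the paper's actual parameterization.

```python
import numpy as np

def conditional_layer_norm(x, cond, W_gamma, W_beta, eps=1e-5):
    """Conditional Layer Normalization: normalize x per feature dimension,
    then apply a scale (gamma) and shift (beta) generated from the
    conditioning vector, so normalization depends on label context."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)          # standard LayerNorm core
    gamma = 1.0 + cond @ W_gamma                     # condition-dependent scale
    beta = cond @ W_beta                             # condition-dependent shift
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
d, c = 8, 4                          # feature dim, condition (label embedding) dim
x = rng.normal(size=(2, d))          # two instance representations
cond = rng.normal(size=(2, c))       # hypothetical label-embedding conditions
W_gamma = 0.01 * rng.normal(size=(c, d))
W_beta = 0.01 * rng.normal(size=(c, d))
out = conditional_layer_norm(x, cond, W_gamma, W_beta)
print(out.shape)                     # (2, 8)
```

When the conditioning vector is zero, this reduces to a plain LayerNorm, which is one reason CLN is a convenient way to modulate a shared encoder with per-label information.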
