The goal of partial multi-label learning is to induce a multi-label classifier from partial multi-label data, where each instance is annotated with a set of candidate labels of which only a subset is valid. Many existing studies either fail to fully exploit instance and label correlations to eliminate noisy labels or build an oversimplified multi-label classifier, both of which are unfavorable for generalization performance. In this article, we put forward a novel model named Pml-ilc to learn a multi-label classifier from partial multi-label data. Specifically, Pml-ilc first encodes instances and labels into a compact semantic space and takes full advantage of instance and label correlations to eliminate noisy labels. It then induces a linear mapping from the feature space to the label space, exploiting label-specific features and instance correlations to facilitate the classifier learning process. Finally, the two steps are combined into a joint optimization problem, and an efficient alternating optimization procedure is developed to find a satisfactory solution. Extensive experiments show that Pml-ilc achieves superior performance on both real-world and synthetic partial multi-label datasets across different evaluation metrics.
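To make the alternating scheme described above concrete, the following is a minimal sketch (not the authors' actual Pml-ilc formulation) of how label denoising and linear classifier learning can be alternated on partial multi-label data. All variable names, the ridge penalty `lam`, and the fidelity weight `mu` are illustrative assumptions; the real model additionally encodes instances and labels in a semantic space and uses label-specific features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic partial multi-label data (hypothetical setup, for illustration only):
# X holds features, Y holds candidate labels contaminated with false positives.
n, d, q = 100, 20, 5
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((d, q))
Y_true = (X @ W_true > 0).astype(float)
Y = np.clip(Y_true + (rng.random((n, q)) < 0.2), 0, 1)  # inject noisy candidates

lam, mu = 1.0, 1.0      # ridge penalty and candidate-fidelity weight (assumed)
P = Y.copy()            # current estimate of denoised label confidences
W = np.zeros((d, q))    # linear mapping from feature space to label space

for _ in range(20):
    # Step 1: fix P, update the linear classifier W by ridge regression.
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ P)
    # Step 2: fix W, update the confidences P by blending the candidate
    # labels with the classifier's output, then restrict P to the candidate
    # set so non-candidate labels stay zero.
    P = (mu * Y + X @ W) / (mu + 1.0)
    P = np.clip(P, 0.0, 1.0) * Y

pred = (X @ W > 0.5).astype(float)  # final multi-label predictions
```

In a full treatment, Step 2 would also pull in instance and label correlations (e.g. graph-Laplacian or low-rank terms) rather than a simple blend, but the fixed-point structure of the alternating optimization is the same.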