Abstract

This work studies multi-label learning (MLL), where each instance is associated with a subset of positive labels. A good multi-label predictor should, for each instance, produce predicted positive labels that are close to the ground-truth positive ones. In this work, we propose a new loss, named Groupwise Ranking LosS (GRLS), for multi-label learning. Minimizing GRLS encourages the predicted relevancy scores of the ground-truth positive labels to be higher than those of the negative ones. More importantly, its time complexity is linear in the number of candidate labels, in contrast to the quadratic complexity of some pairwise ranking based methods. We further analyze GRLS from the perspective of label-wise margin and suggest that a multi-label predictor is label-wise effective if and only if GRLS is optimal. We also analyze the relations between GRLS and several widely used loss functions for MLL. Finally, we apply GRLS to multi-label learning, and extensive experiments on several benchmark multi-label datasets demonstrate the competitive performance of the proposed method relative to state-of-the-art methods.
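To make the label-wise margin notion concrete, the following PyTorch sketch computes one plausible per-instance margin: the gap between the lowest-scored ground-truth positive label and the highest-scored negative label, so that a positive margin means every positive label outranks every negative one. The function name and this particular definition are assumptions for illustration; the paper's formal definition may differ.

    import torch

    def labelwise_margin(scores, targets):
        # scores:  (batch, m) real-valued relevancy scores
        # targets: (batch, m) binary {0, 1} label indicators
        # Assumed margin: lowest positive score minus highest negative
        # score. A positive value means every ground-truth positive
        # label is ranked above every negative label for that instance.
        pos_mask = targets.bool()
        min_pos = scores.masked_fill(~pos_mask, float("inf")).min(dim=1).values
        max_neg = scores.masked_fill(pos_mask, float("-inf")).max(dim=1).values
        return min_pos - max_neg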

Highlights

  • Multi-label learning (MLL) is an important task in machine learning, where each instance is associated with multiple labels reflecting its diverse semantics [1]–[4]

  • We propose a new loss, named Groupwise Ranking LosS (GRLS), for multi-label learning that implements the top-k label principle

  • We propose a new type of loss for multi-label learning, named Groupwise Ranking LosS (GRLS), that naturally encourages the predicted relevancy scores of ground-truth positive labels to rank higher than those of negative ones


Summary

INTRODUCTION

Multi-label learning (MLL) is an important task in machine learning, where each instance is associated with multiple labels reflecting its diverse semantics [1]–[4]. A good multi-label algorithm should place the predicted relevancy scores of the ground-truth positive labels at the top-k positions among the m predicted scores for each instance; we refer to this as the top-k label principle. Pairwise ranking based methods realize this principle by constructing (positive, negative) label pairs for each instance, the number of which can be as large as O(m²). We propose a new loss, named Groupwise Ranking LosS (GRLS), for multi-label learning that implements the top-k label principle without such pair construction: it naturally encourages the predicted relevancy scores of ground-truth positive labels to rank higher than those of negative ones, does not require pairwise comparisons, and therefore scales better on large-scale multi-label datasets. Experimental results on several benchmark multi-label datasets verify the superior performance of the proposed GRLS over many widely used loss functions in multi-label learning. A sketch of the groupwise idea is given below.
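The following minimal PyTorch sketch illustrates how a groupwise formulation avoids pair enumeration: each score group (positives and negatives) is aggregated with a single log-sum-exp, which upper-bounds the O(m²) pairwise loss log(1 + Σ_{i∈P, j∈N} e^{s_j − s_i}) at O(m) cost per instance. The function name and this specific log-sum-exp formulation are illustrative assumptions, not the paper's exact GRLS definition.

    import torch

    def groupwise_ranking_loss(scores, targets):
        # scores:  (batch, m) real-valued relevancy scores
        # targets: (batch, m) binary {0, 1} label indicators
        # Illustrative groupwise surrogate:
        #   log(1 + sum_{j in N} e^{s_j}) + log(1 + sum_{i in P} e^{-s_i})
        # which upper-bounds the pairwise ranking loss in O(m) time.
        pos_mask = targets.bool()
        neg_scores = scores.masked_fill(pos_mask, float("-inf"))      # keep negatives
        pos_scores = (-scores).masked_fill(~pos_mask, float("-inf"))  # keep positives
        # Append a zero logit so each term is log(1 + sum exp(.)).
        zero = torch.zeros(scores.size(0), 1, device=scores.device)
        neg_term = torch.logsumexp(torch.cat([zero, neg_scores], dim=1), dim=1)
        pos_term = torch.logsumexp(torch.cat([zero, pos_scores], dim=1), dim=1)
        return (neg_term + pos_term).mean()

    # Example: 4 instances, m = 10 candidate labels
    scores = torch.randn(4, 10)
    targets = (torch.rand(4, 10) > 0.7).float()
    print(groupwise_ranking_loss(scores, targets))

Because both terms aggregate their group in a single pass, no explicit (positive, negative) pair list is ever built, which is the source of the linear rather than quadratic complexity.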

RELATED WORK
OPTIMIZATION OF GRLS
GRLS FOR MULTI-LABEL LEARNING
EVALUATION METRICS
Findings
CONCLUSION