Abstract

Large-scale multi-label learning (LMLL) aims to annotate unseen data with relevant labels drawn from an extremely large set of candidate labels. Labels are widely observed to follow a long-tailed distribution, in which a large fraction of labels are tail labels. Most previous studies assume that performance benefits from incorporating tail labels; nonetheless, how tail labels affect performance has not been quantified. In this article, we show that, whether labels are randomly missing or misclassified, their impact on the commonly used LMLL evaluation metrics (Propensity-Scored Precision (PSP@k) and Propensity-Scored nDCG (PSnDCG@k)) is directly related to the product of the label weights and the label frequencies. In particular, when labels share equal weights, tail labels have much less impact than common labels owing to the scarcity of relevant examples. Motivated by this observation, we propose low-complexity LMLL methods that achieve fast prediction and compact model size by restraining less performance-influential labels. Since discarding labels entirely may degrade predictive capability, we further propose preserving only the dominant model parameters for these less influential labels. Experiments show that both prediction time and model size are significantly reduced without sacrificing much predictive performance.
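For context, the following is a brief LaTeX sketch of the standard propensity-scored precision metric and of the weight-frequency product mentioned above; the notation (score vector \hat{y}, ground-truth vector y, propensity p_l, frequency N_l) follows the common formulation of PSP@k and is not quoted from this paper.

% Sketch of the usual PSP@k definition (common formulation, stated here as an assumption).
% \hat{y}: predicted score vector; y: ground-truth label vector;
% p_l: propensity of label l (its estimated observation probability);
% rank_k(\hat{y}): the k highest-scoring labels.
\[
  \mathrm{PSP@}k(\hat{y}, y) \;=\; \frac{1}{k} \sum_{l \in \mathrm{rank}_k(\hat{y})} \frac{y_l}{p_l}
\]
% With inverse-propensity weight w_l = 1/p_l and label frequency N_l (the number of
% examples for which label l is relevant), the total contribution of label l over a
% dataset scales with the weight-frequency product highlighted in the abstract:
\[
  \text{impact of label } l \;\propto\; w_l \cdot N_l, \qquad w_l = \frac{1}{p_l}
\]

Under equal weights (w_l constant across labels), this product reduces to the label frequency N_l alone, which is why tail labels contribute comparatively little to the metric.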
