Abstract

Multilabel learning addresses problems in which each instance is associated with multiple labels. In practical applications, multilabel learning often suffers from imperfect training data: labels may be noisy, features may be corrupted, or both. Most existing multilabel learning models consider only label noise or only feature noise. Theoretically, ignoring either kind of noise in the learning process may lead to an unreasonable model and thus degrade multilabel learning performance. In this paper, we propose a robust multilabel learning model, Tri-structured-Sparsity induced Joint Feature Selection and Classification (TriS-JFSC), to handle data with hybrid noise. Specifically, the proposed TriS-JFSC model employs a tri-structured-sparsity regularization, bridged with a label enhancement matrix, to simultaneously smooth feature and label noise, and embeds a feature selection scheme that jointly learns label-shared and label-specific features to boost multilabel learning performance. Furthermore, by employing the Alternating Direction Method of Multipliers (ADMM), we design a simple but efficient optimization algorithm to solve the proposed TriS-JFSC model. Finally, extensive experiments on several benchmark datasets demonstrate that our TriS-JFSC model outperforms other state-of-the-art learning methods.
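The abstract does not spell out the optimization here, but the core idea of sparsity-induced joint feature selection can be illustrated with a minimal sketch: in a linear multilabel model, an ℓ2,1 (row-sparsity) penalty on the weight matrix zeroes out entire feature rows, which is how label-shared feature selection is typically realized. The sketch below is an illustrative assumption, not the authors' TriS-JFSC: it uses synthetic data, a plain least-squares loss, and proximal gradient descent instead of the tri-structured regularizer, label enhancement matrix, and ADMM solver described in the paper.

```python
import numpy as np

# Minimal sketch (NOT the authors' TriS-JFSC): a linear multilabel model
#   min_W  0.5 * ||X W - Y||_F^2 + lam * ||W||_{2,1}
# The l2,1 penalty zeroes out whole rows of W, discarding features that are
# uninformative for all labels simultaneously (label-shared feature selection).
rng = np.random.default_rng(0)
n, d, L = 200, 50, 5                       # samples, features, labels
W_true = np.zeros((d, L))
W_true[:10] = rng.normal(size=(10, L))     # only the first 10 features matter
X = rng.normal(size=(n, d))
Y = X @ W_true + 0.1 * rng.normal(size=(n, L))  # real-valued label scores

lam = 5.0                                  # sparsity strength (illustrative)

def objective(W):
    return 0.5 * np.sum((X @ W - Y) ** 2) + lam * np.sum(np.linalg.norm(W, axis=1))

def l21_prox(W, t):
    """Row-wise soft thresholding: the proximal operator of t * ||W||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

# Proximal gradient descent (ISTA); step = 1 / Lipschitz constant of the loss.
step = 1.0 / np.linalg.norm(X, ord=2) ** 2
W = np.zeros((d, L))
for _ in range(500):
    W = l21_prox(W - step * (X.T @ (X @ W - Y)), step * lam)

zero_rows = int((np.linalg.norm(W, axis=1) < 1e-10).sum())
print(f"objective: {objective(W):.2f}, exactly-zero feature rows: {zero_rows}/{d}")
```

The key design point the sketch shares with structured-sparsity methods is that the penalty acts on groups (here, whole rows of `W`) rather than individual coefficients, so feature selection and classifier training happen in a single joint objective.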

Highlights

  • Multilabel learning deals with problems in which each instance is assigned multiple labels

  • We introduce an adaptive feature selection mechanism that extracts the most discriminative features for each label, boosting multilabel learning performance

  • We propose a robust Tri-structured-Sparsity induced Joint Feature Selection and Classification (TriS-JFSC) model to address the multilabel learning problem on imperfect training data


Introduction

Multilabel learning deals with problems in which each instance is assigned multiple labels. Most proposed multilabel learning methods do not account for data noise, which degrades performance in practical applications when noisy data are encountered. Noisy data are very common in practice, and ignoring this problem reduces training performance and leads to an unreasonable model. Since observed values tend to be perturbed, features may contain noise. Given this consideration, some methods have been proposed to deal with feature noise [11]–[13].
