Abstract

Feature selection is a key technique for tackling the curse of dimensionality in multi-label learning. Many embedded multi-label feature selection methods have been developed, but they struggle to identify and exclude redundant features. To address this issue, this paper proposes a multi-label feature selection method that combines robust structural learning with discriminative label regularization. The proposed method starts from the feature space rather than the data space, motivated by the principle that redundant features exhibit high similarity or strong correlation. To exclude redundant features, a regularization on the feature selection matrix is designed by combining an ℓ2,1-norm penalty with inner products of feature weight vectors; this regularization helps to learn a robust structure in the feature selection matrix. Meanwhile, both the similarity and the dissimilarity of instance labels are used to explore discriminative label correlations. Extensive experiments verify the effectiveness of the proposed model for feature selection.
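The redundancy-excluding regularizer described above can be sketched as follows. This is an illustrative reading, not the paper's exact objective: it combines an ℓ2,1-norm on the feature selection matrix W (rows index features) with absolute inner products between pairs of feature weight vectors, which grow when two features receive similar weights. The function name and the trade-off hyperparameters `alpha` and `beta` are assumptions for illustration.

```python
import numpy as np

def redundancy_regularizer(W, alpha=1.0, beta=1.0):
    """Hedged sketch of a redundancy-aware penalty on a feature
    selection matrix W of shape (d features, c labels).

    - l2,1-norm term: sum of row l2 norms, encouraging row sparsity
      so that only a few features are selected.
    - inner-product term: sum of |<w_i, w_j>| over distinct feature
      pairs, penalizing pairs of features with correlated weights
      (i.e., redundant features).

    alpha and beta are illustrative trade-off weights, not values
    taken from the paper.
    """
    l21 = np.sum(np.linalg.norm(W, axis=1))  # sum of row l2 norms
    G = np.abs(W @ W.T)                      # |<w_i, w_j>| for all feature pairs
    inner = np.sum(G) - np.trace(G)          # keep off-diagonal pairs only
    return alpha * l21 + beta * inner
```

For orthogonal feature weight vectors the inner-product term vanishes, so only the sparsity term contributes; identical rows are penalized most heavily, which is the intended pressure against redundancy.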
