Abstract

Feature selection can alleviate the curse of dimensionality by selecting more discriminative features, and it plays an important role in multi-label learning. Recently, embedded feature selection methods have received increasing attention. However, most existing methods learn low-dimensional embeddings under the guidance of the local structure between pairs of original instances, thereby ignoring the higher-order structure among instances and remaining sensitive to noise in the original features. To address these issues, we propose a feature selection method named discriminative multi-label feature selection with adaptive graph diffusion (MFS-AGD). Specifically, we first construct a graph embedding learning framework equipped with adaptive graph diffusion to uncover a latent subspace that preserves the higher-order structure information between four-tuples. Then, the Hilbert–Schmidt independence criterion (HSIC) is incorporated into the embedding learning framework to ensure maximum dependency between the latent representation and the labels. Benefiting from the interactive optimization of the feature selection matrix, the latent representation, and the similarity graph, the selected features can accurately capture the higher-order structural and supervised information of the data. By further considering the correlation between labels, MFS-AGD is extended to a more discriminative version, i.e., LMFS-AGD. Extensive experimental results on various benchmark data sets validate the advantages of the proposed MFS-AGD and LMFS-AGD methods.
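To make the HSIC term concrete, the following is a minimal sketch of the empirical HSIC between a latent representation and a label matrix, using linear kernels and the standard centering-matrix formulation. The function name and the choice of linear kernels are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def empirical_hsic(Z, Y):
    """Empirical HSIC between representation Z (n x d) and labels Y (n x c).

    Uses linear kernels K = Z Z^T and L = Y Y^T; names and kernel choice
    are illustrative, not taken from the paper.
    """
    n = Z.shape[0]
    K = Z @ Z.T                      # kernel over the latent representation
    L = Y @ Y.T                      # kernel over the label matrix
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix H = I - (1/n) 11^T
    # Biased empirical estimator: trace(K H L H) / (n - 1)^2
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Maximizing this quantity over the latent representation encourages
# statistical dependence between the learned subspace and the labels.
```

With linear kernels this estimator reduces to the squared Frobenius norm of the cross-covariance between Z and Y, so it is zero when the centered representation and labels are uncorrelated and grows as their linear dependence increases.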
