Abstract

Subspace learning is a matrix decomposition method. Some algorithms apply subspace learning to feature selection, but they ignore the local discriminative information contained in the data. In this paper, we propose a new unsupervised feature selection algorithm to address this issue, called local discriminative based sparse subspace learning for feature selection (LDSSL). We first introduce a local discriminant model into our subspace-learning framework for feature selection. This model simultaneously preserves both the local discriminant structure and the local geometric structure of the data, so it not only improves the discriminative ability of the algorithm but also exploits the local geometric structure information contained in the data. Because the local discriminant model is linear, it cannot handle nonlinear data effectively, so we kernelize it to obtain a nonlinear version. We next impose an L1-norm constraint on the feature selection matrix, which ensures the sparsity of the feature selection matrix and further improves the algorithm's discriminative ability. We then give the objective function, convergence proof, and iterative update rules of the algorithm. We compare LDSSL with eight state-of-the-art algorithms on six datasets. The experimental results show that LDSSL is more effective than the eight other feature selection algorithms.
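The abstract does not spell out LDSSL's objective or update rules, but the general recipe it describes — learn a sparse (L1-regularized) matrix in a subspace-learning formulation, then score features by the rows of that matrix — can be illustrated with a minimal sketch. The code below uses a generic sparse self-representation objective, min_W ||X − XW||_F² + λ||W||₁, solved by proximal gradient descent (ISTA); this is an assumption for illustration, not the authors' algorithm, and all function names are hypothetical.

```python
import numpy as np

def soft_threshold(A, t):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def sparse_self_representation(X, lam=1.0, n_iter=300):
    """Solve min_W ||X - X W||_F^2 + lam * ||W||_1 via ISTA (illustrative stand-in
    for an L1-constrained feature selection matrix, not LDSSL's objective)."""
    d = X.shape[1]
    G = X.T @ X
    step = 1.0 / (2.0 * np.linalg.norm(G, 2) + 1e-12)  # 1 / Lipschitz constant of the gradient
    W = np.zeros((d, d))
    for _ in range(n_iter):
        grad = 2.0 * (G @ W - G)                        # gradient of ||X - XW||_F^2
        W = soft_threshold(W - step * grad, step * lam) # gradient step + L1 prox
    return W

def select_features(X, k, lam=1.0):
    """Rank features by the L2 norm of their rows in W; return the top-k indices."""
    W = sparse_self_representation(X, lam=lam)
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]
```

A larger λ drives more rows of W to zero, so the surviving rows single out the features most useful for reconstructing the rest — the same role the abstract assigns to the sparse feature selection matrix.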
