Abstract

Many fundamental problems in machine learning require some form of dimensionality reduction. To this end, two different strategies have been used: manifold learning and feature selection. Manifold learning (or data embedding) computes a subspace from the original data by recombining or transforming its features. Feature selection aims to select the most relevant features in the original space. In this paper, we propose a novel cooperative manifold learning-feature selection framework that goes beyond the simple concatenation of these two modules. Our basic idea is to transform a given shallow embedding into a deep variant by computing a cascade of embeddings, in which each embedding undergoes feature selection and elimination. We use filter approaches to efficiently eliminate irrelevant features at any stage of the process. As a case study, the proposed framework was instantiated with two typical linear embedding algorithms: Local Discriminant Embedding (LDE), a supervised technique, and Locality Preserving Projections (LPP), an unsupervised technique, on four challenging face databases, and it was compared with other cooperative schemes. Moreover, a comparison with several state-of-the-art manifold learning methods is provided. As our experimental study shows, the proposed framework achieves superior learning performance with respect to classic cooperative schemes and to many competing manifold learning methods.
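To make the cascade idea concrete, the following minimal sketch (not the authors' implementation) alternates a shallow linear embedding stage with a filter-based feature-elimination stage. Here a plain PCA projection stands in for LPP/LDE, and a simple variance filter stands in for the paper's filter-based relevance criteria; all function names and ratios are illustrative assumptions.

```python
import numpy as np

def linear_embed(X, n_comp):
    """Shallow linear embedding via PCA (a stand-in for LPP/LDE)."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data give the projection axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

def variance_filter(X, keep_ratio):
    """Filter step: keep only the highest-variance features,
    eliminating the rest (a stand-in for the paper's filter criteria)."""
    n_keep = max(1, int(X.shape[1] * keep_ratio))
    idx = np.argsort(X.var(axis=0))[::-1][:n_keep]
    return X[:, idx]

def deep_cascade_embedding(X, n_stages=3, dim_ratio=0.7, keep_ratio=0.8):
    """Turn a shallow embedding into a deep variant: each stage embeds
    the current representation, then eliminates irrelevant features
    before feeding the result to the next stage."""
    Z = X
    for _ in range(n_stages):
        n_comp = max(1, int(Z.shape[1] * dim_ratio))
        Z = linear_embed(Z, n_comp)
        Z = variance_filter(Z, keep_ratio)
    return Z

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))   # 100 samples, 64 original features
Z = deep_cascade_embedding(X)
print(Z.shape)                   # far fewer features than the original 64
```

Each stage therefore shrinks the representation twice: once by the embedding itself and once by the filter, which is what distinguishes the cascade from a single embedding followed by a single selection pass.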
