Abstract

Images are often represented as high-dimensional vectors when used for classification. As a result, dimensionality reduction methods have been developed to avoid the curse of dimensionality. Among them, Laplacian eigenmaps (LE) have attracted widespread attention. The original LE adopts a point-to-point (P2P) distance metric for manifold learning, which unfortunately offers little robustness to noise. In this paper, a novel supervised dimensionality reduction method, named feature space to feature space distance metric learning (FSDML), is presented. For any point, a feature space is constructed as the span of its k intra-class nearest neighbors, yielding a local projection onto its nearest feature space. The feature space to feature space (S2S) distance metric is then defined as the Euclidean distance between the two corresponding projections. On one hand, the proposed S2S distance metric gains robustness to noise from the local projection; on the other hand, the projection onto the nearest feature space fully exploits the local geometric information hidden in the original data. Moreover, both class label similarity and dissimilarity are measured, from which an intra-class graph and an inter-class graph are built separately. Finally, a subspace for classification is found by simultaneously maximizing the S2S-based manifold-to-manifold distance and preserving the S2S-based locality of the manifolds. Experiments on both synthetic and benchmark data sets validate the proposed method's performance against several state-of-the-art dimensionality reduction methods.
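The following is a minimal sketch, not the paper's implementation, of one plausible reading of the S2S construction described above: each point's nearest feature space is the column span of its k intra-class nearest neighbors, each point is orthogonally projected onto such a feature space, and the S2S distance is taken as the Euclidean distance between the two resulting projections. The function names, the choice of projecting each point onto its own nearest feature space, and the use of least squares for the projection are assumptions for illustration; the exact pairing of projections and neighbor selection follows the full paper.

```python
import numpy as np

def project_onto_span(x, B):
    """Orthogonally project vector x onto the column span of B.

    B is assumed to hold, column by column, the k intra-class nearest
    neighbors that span the point's nearest feature space.
    """
    # Least-squares coefficients; lstsq also handles rank-deficient B.
    coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
    return B @ coeffs

def s2s_distance(x_i, x_j, B_i, B_j):
    """Hypothetical S2S distance between x_i and x_j.

    Each point is projected onto its own nearest feature space (B_i, B_j),
    and the distance is the Euclidean distance between the projections.
    """
    p_i = project_onto_span(x_i, B_i)
    p_j = project_onto_span(x_j, B_j)
    return np.linalg.norm(p_i - p_j)

# Toy usage: two points with k = 2 intra-class neighbors each.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_i, x_j = rng.normal(size=5), rng.normal(size=5)
    B_i, B_j = rng.normal(size=(5, 2)), rng.normal(size=(5, 2))
    print(s2s_distance(x_i, x_j, B_i, B_j))
```

Because the distance is computed between local projections rather than raw points, small noise components orthogonal to the neighbor span are discarded, which is the source of the robustness claimed in the abstract.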
