Abstract

Principal component analysis (PCA) has given rise to a series of classical dimensionality reduction methods for unsupervised learning. However, PCA may not sufficiently preserve the differences among the projected data samples, causing problems in both data visualization and recognition. Although some nonlinear dimensionality reduction methods, e.g., isometric mapping (Isomap) and locally linear embedding (LLE), have been proposed to preserve the samples' differences, their nonlinear mappings obscure the relationships among the data features. In this paper, we propose a linear dimensionality reduction method, called Divergent Projection Analysis (DPA), to preserve the largest differences among the samples. DPA projects the samples into a low-dimensional space such that any two projected samples are at least a certain distance apart, which leads to a nonconvex optimization problem. We solve this problem with both a subgradient descent algorithm and a particle swarm optimization algorithm, and compare the quality of their solutions in the experiments. Experiments on a synthetic dataset and three face recognition problems confirm the effectiveness of the proposed method compared with other state-of-the-art methods.
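
The abstract describes the core idea but not the exact objective. The sketch below is one plausible reading in NumPy: a hinge-style penalty that pushes every projected pair to be at least a distance d apart, minimized by projected subgradient descent with a QR step to keep the projection well conditioned. The function name dpa_subgradient, the penalty form, and the orthonormality step are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def dpa_subgradient(X, k, d=1.0, lr=0.01, iters=500, seed=0):
    """Hypothetical DPA-style projection.

    X: (n, p) data matrix; k: target dimension; d: minimum pairwise
    distance the projected samples should keep (assumed objective).
    Returns a (p, k) projection matrix W with orthonormal columns.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W = rng.standard_normal((p, k))

    for _ in range(iters):
        Z = X @ W                       # projected samples, shape (n, k)
        G = np.zeros_like(W)
        for i in range(n):
            for j in range(i + 1, n):
                diff = Z[i] - Z[j]
                dist = np.linalg.norm(diff)
                if dist < d:            # pair violates the margin
                    # subgradient of max(0, d - ||W^T (x_i - x_j)||) w.r.t. W
                    x_diff = (X[i] - X[j])[:, None]
                    G -= x_diff @ (diff / (dist + 1e-12))[None, :]
        W -= lr * G                     # subgradient step pushes pairs apart
        W, _ = np.linalg.qr(W)          # re-orthonormalize the columns

    return W

if __name__ == "__main__":
    # toy usage: embed 50 random 10-D points into 2-D
    X = np.random.default_rng(1).standard_normal((50, 10))
    W = dpa_subgradient(X, k=2, d=1.0)
    Z = X @ W                           # 2-D embedding with enlarged pairwise distances
```

The pairwise loop makes each iteration O(n^2), which matches the all-pairs distance constraint in the abstract; a particle swarm variant would instead treat the same penalty as a black-box fitness over candidate W matrices.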
