Abstract

Manifold learning is a popular recent approach to nonlinear dimensionality reduction. However, conventional manifold learning methods assume that the data distribution is uniform, and they struggle to recover the manifold structure of data in a low-dimensional space when the data is distributed non-uniformly. This paper presents an improved Laplacian Eigenmaps algorithm that extends the classical Laplacian Eigenmaps (LE) algorithm by introducing a novel neighbor selection method based on local density. This method optimizes the discovery of the intrinsic structure and thus reduces the impact of variation in the data distribution. Several comparative experiments between conventional manifold learning methods and the improved LE are conducted on synthetic and real-world datasets. The experimental results demonstrate the effectiveness and robustness of our algorithm.
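
The abstract does not specify how the density-based neighbor selection is carried out. The following Python sketch is only one plausible interpretation: it runs classical Laplacian Eigenmaps, but adapts the neighborhood size of each point using a simple local-density proxy (distance to the k-th nearest neighbor). The density estimate and the adaptation rule are assumptions for illustration, not the paper's method.

```python
import numpy as np
from scipy.linalg import eigh

def density_adaptive_le(X, k_base=10, n_components=2, t=1.0):
    """Laplacian Eigenmaps with a density-adaptive neighborhood size.

    Assumption (not from the paper): local density is estimated from the
    distance to the k_base-th neighbor, and denser points are given a
    larger neighborhood while sparser points are given a smaller one.
    """
    n = X.shape[0]

    # Pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.maximum(D2, 0.0, out=D2)

    # Local density proxy: inverse of the radius to the k_base-th neighbor.
    r_k = np.sqrt(np.sort(D2, axis=1)[:, k_base])
    density = 1.0 / (r_k + 1e-12)
    rel = density / density.mean()

    # Adapt k per point; clip to keep the graph connected and well-defined.
    k_i = np.clip(np.round(k_base * rel).astype(int), 3, n - 1)

    # Heat-kernel weights on the adaptive k-NN graph, symmetrized.
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:k_i[i] + 1]   # skip the point itself
        W[i, idx] = np.exp(-D2[i, idx] / t)
    W = np.maximum(W, W.T)

    # Graph Laplacian and generalized eigenproblem L f = lambda D f.
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W
    _, vecs = eigh(L, Dg)

    # Discard the trivial constant eigenvector (eigenvalue ~ 0).
    return vecs[:, 1:n_components + 1]
```

On a non-uniformly sampled manifold, letting the neighborhood size follow the local density is one way to avoid short-circuit edges in sparse regions while still capturing enough structure in dense regions, which is the kind of robustness the abstract claims for the improved LE.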
