Abstract

Manifold learning, also called nonlinear dimensionality reduction, affords a way to understand and visualize the structure of nonlinear hyperspectral datasets. These methods use graphs to represent the manifold topology, and use metrics such as geodesic distance, allowing higher-dimensional objects to be embedded in lower dimensions. However, the computational complexity of some manifold learning algorithms is high, which makes them very slow. In this paper we present CUDA-based parallel implementations of the three most popular manifold learning algorithms, Isomap, locally linear embedding, and Laplacian eigenmaps, using the CUDA multi-thread model. The result of this dimensionality reduction was employed in segmentation using active contours as an application of the reduced hyperspectral images. The manifold learning algorithms were implemented on a 64-bit workstation equipped with a quad-core Intel® Xeon, 12 GB of RAM, and two NVIDIA Tesla C1060 GPU cards. The parallel implementations significantly outperform their sequential counterparts, achieving speedups of up to 26x. They also show good scalability when varying the size of the dataset and the number of K nearest neighbors.

Keywords: Manifold Learning, Nonlinear Dimensionality Reduction, Isomap, Locally Linear Embedding, Laplacian Eigenmaps, CUDA, GPU, Shortest Path, Graph.
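As a rough illustration of the kind of GPU parallelism described in the abstract, the sketch below computes the pairwise Euclidean distance matrix for a set of points, the first step in building the K-nearest-neighbor graph shared by Isomap, locally linear embedding, and Laplacian eigenmaps. This is a minimal hypothetical example, not the authors' implementation; the kernel name, the one-thread-per-pair mapping, and the data sizes (1024 points, 224 bands) are illustrative assumptions.

```cuda
// Minimal sketch (assumed, not the paper's code): each thread computes the
// Euclidean distance between one pair of points (i, j).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void pairwiseDistances(const float *points, float *dist,
                                  int numPoints, int dim)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPoints && j < numPoints) {
        float sum = 0.0f;
        for (int d = 0; d < dim; ++d) {
            float diff = points[i * dim + d] - points[j * dim + d];
            sum += diff * diff;
        }
        dist[i * numPoints + j] = sqrtf(sum);
    }
}

int main()
{
    const int numPoints = 1024;   // e.g. hyperspectral pixels (assumed size)
    const int dim = 224;          // e.g. spectral bands (assumed size)

    size_t ptsBytes  = (size_t)numPoints * dim * sizeof(float);
    size_t distBytes = (size_t)numPoints * numPoints * sizeof(float);

    float *h_points = (float *)malloc(ptsBytes);
    for (int i = 0; i < numPoints * dim; ++i)
        h_points[i] = (float)rand() / RAND_MAX;   // placeholder data

    float *d_points, *d_dist;
    cudaMalloc(&d_points, ptsBytes);
    cudaMalloc(&d_dist, distBytes);
    cudaMemcpy(d_points, h_points, ptsBytes, cudaMemcpyHostToDevice);

    // One 16x16 thread block per tile of the distance matrix.
    dim3 block(16, 16);
    dim3 grid((numPoints + block.x - 1) / block.x,
              (numPoints + block.y - 1) / block.y);
    pairwiseDistances<<<grid, block>>>(d_points, d_dist, numPoints, dim);
    cudaDeviceSynchronize();

    // The distance matrix would next be pruned to the K nearest neighbors of
    // each point to form the manifold graph used by the embedding algorithms.
    float *h_dist = (float *)malloc(distBytes);
    cudaMemcpy(h_dist, d_dist, distBytes, cudaMemcpyDeviceToHost);
    printf("d(0,1) = %f\n", h_dist[1]);

    cudaFree(d_points); cudaFree(d_dist);
    free(h_points); free(h_dist);
    return 0;
}
```

From the resulting K-nearest-neighbor graph, Isomap would compute shortest-path (geodesic) distances, while locally linear embedding and Laplacian eigenmaps would build local weight or Laplacian matrices before the final eigendecomposition.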

