Abstract

Hyperspectral remote sensing is now widely used in many fields, because its high spatial and spectral resolution provides far more information about the observed objects. Abundance estimation is one of the most important topics among hyperspectral techniques: it makes it possible to analyze the components of a mixed pixel precisely. However, traditional algorithms such as Least Squares Error (LSE) and Orthogonal Subspace Projection (OSP) involve many matrix inversion and multiplication operations, which makes them slow in software and difficult to realize in hardware. As a result, these algorithms cannot meet the real-time demands of applications in which remote sensing images with large amounts of data must be processed quickly. Recently, a new algorithm named Orthogonal Vector Projection (OVP) was proposed; it estimates the abundances of endmembers in a mixed pixel by Gram-Schmidt orthogonalization, involves no matrix inversion, and is well suited to parallel computation. In this paper, a GPU solution for Orthogonal Vector Projection (GPU-OVP) based on CUDA is detailed. It takes advantage of the Gram-Schmidt process in the algorithm and redesigns it in a parallel pattern, so the new version is much faster than the original algorithm running on the CPU. Furthermore, the LSE and OSP algorithms are implemented on both CPU and GPU for comparison. The experimental results show that the proposed GPU-OVP method is faster than its CPU counterpart and faster than GPU-OSP, while being easier to realize than LSE. It therefore has considerable potential for real-time applications.
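The core idea behind the OVP approach can be illustrated with a small sketch. The sketch below (a simplified NumPy illustration, not the authors' CUDA implementation; the function name `ovp_abundance` and all variable names are assumptions) estimates the abundance of each endmember by Gram-Schmidt orthogonalization: for endmember i, the other endmembers are orthogonalized into a basis, their components are subtracted from endmember i to get a residual vector u that is orthogonal to every other endmember, and the abundance follows from projecting the mixed pixel onto u. No matrix inversion is needed, and the per-endmember loop is independent, which is what makes the method amenable to parallel (GPU) execution.

```python
import numpy as np

def ovp_abundance(E, x):
    """Estimate abundances of endmembers (columns of E, one spectrum per
    column) in mixed pixel x by orthogonal vector projection.

    Sketch only: the linear mixing model x = E @ a (+ noise) is assumed.
    """
    bands, p = E.shape
    a = np.zeros(p)
    for i in range(p):
        # Gram-Schmidt: build an orthonormal basis of the OTHER endmembers.
        basis = []
        for j in range(p):
            if j == i:
                continue
            w = E[:, j].copy()
            for q in basis:
                w -= (w @ q) * q          # remove components along the basis
            n = np.linalg.norm(w)
            if n > 1e-12:                 # skip (near-)dependent vectors
                basis.append(w / n)
        # Residual of endmember i after removing those components;
        # u is orthogonal to every other endmember by construction.
        u = E[:, i].copy()
        for q in basis:
            u -= (u @ q) * q
        # Since u is orthogonal to e_j for j != i, x.u = a_i * (e_i.u),
        # so the abundance of endmember i falls out of a single projection.
        a[i] = (x @ u) / (E[:, i] @ u)
    return a
```

Under the noise-free linear mixing model, this recovers the abundance vector exactly; with noise, each estimate is the projection-based approximation described in the abstract. Note that no matrix is ever inverted, only dot products and vector subtractions are used.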

