Abstract
In this paper, we develop a new parallel implementation of the iterative error analysis (IEA) algorithm for lossy hyperspectral image compression on graphics processing units (GPUs), an inexpensive parallel computing platform that has recently become very popular in hyperspectral imaging applications. The proposed GPU implementation is tested on several architectures from NVIDIA, the leading GPU vendor worldwide, and is shown to achieve real-time performance in the analysis of AVIRIS data sets. The GPU implementation of the IEA represents a step toward real-time onboard (lossy) compression of hyperspectral data in which the quality of the compression can also be adjusted in real time.
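The naturally data-parallel step in IEA is scoring every pixel by its reconstruction error under the current endmember set and then selecting the worst-reconstructed pixel as the next endmember. The following is a minimal CUDA sketch of that step only, under assumed names, data layout, and image dimensions; it is an illustration of the general technique, not the authors' implementation.

```cuda
// Hypothetical sketch of the per-pixel error step of IEA on a GPU.
// Layout assumption: band-interleaved-by-pixel (each pixel's bands contiguous).
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <thrust/device_ptr.h>
#include <thrust/extrema.h>

// One thread per pixel: accumulate the squared reconstruction error over all bands.
__global__ void pixelError(const float* image,  // nPixels x nBands, original data
                           const float* recon,  // same layout, current IEA reconstruction
                           float* err,          // nPixels, output squared error
                           int nPixels, int nBands)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nPixels) return;
    float acc = 0.0f;
    for (int b = 0; b < nBands; ++b) {
        float d = image[p * nBands + b] - recon[p * nBands + b];
        acc += d * d;
    }
    err[p] = acc;
}

int main()
{
    const int nPixels = 512 * 512;  // illustrative tile size
    const int nBands  = 224;        // AVIRIS band count

    // Placeholder data; a real run would load an image and its reconstruction.
    std::vector<float> hImage(size_t(nPixels) * nBands, 1.0f);
    std::vector<float> hRecon(size_t(nPixels) * nBands, 0.9f);

    float *dImage, *dRecon, *dErr;
    cudaMalloc(&dImage, hImage.size() * sizeof(float));
    cudaMalloc(&dRecon, hRecon.size() * sizeof(float));
    cudaMalloc(&dErr, nPixels * sizeof(float));
    cudaMemcpy(dImage, hImage.data(), hImage.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dRecon, hRecon.data(), hRecon.size() * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (nPixels + threads - 1) / threads;
    pixelError<<<blocks, threads>>>(dImage, dRecon, dErr, nPixels, nBands);

    // Parallel reduction on the GPU: the pixel with the largest residual
    // becomes the next endmember candidate in IEA.
    thrust::device_ptr<float> errPtr(dErr);
    thrust::device_ptr<float> maxIt = thrust::max_element(errPtr, errPtr + nPixels);
    printf("next endmember candidate: pixel %d\n", int(maxIt - errPtr));

    cudaFree(dImage); cudaFree(dRecon); cudaFree(dErr);
    return 0;
}
```

Because each pixel's error is independent, this step maps one thread per pixel with coalesced band reads, and the argmax is a standard device-side reduction; the compression/quality trade-off comes from how many IEA iterations (endmembers) are retained.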