Abstract

Graphics Processing Units (GPUs) are becoming a widespread tool for general-purpose scientific computing, and are attracting interest for future onboard satellite image processing payloads due to their ability to perform massively parallel computations. This paper describes the GPU implementation of an algorithm for onboard lossy hyperspectral image compression, and proposes an architecture that allows the compression task to be accelerated by parallelizing it on the GPU. The selected algorithm is amenable to parallel computation owing to its block-based operation, and has been optimized here to facilitate GPU implementation while incurring negligible overhead with respect to the original single-threaded version. In particular, a parallelization strategy has been designed for both the compressor and the corresponding decompressor, which are implemented on a GPU using Nvidia's CUDA parallel architecture. Experimental results on several hyperspectral images with different spatial and spectral dimensions are presented, showing significant speed-ups with respect to a single-threaded CPU implementation. These results highlight the significant benefits of GPUs for onboard image processing, and particularly image compression, demonstrating the potential of GPUs as a future hardware platform for very high data rate instruments.
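
The abstract attributes the algorithm's suitability for GPU acceleration to its block-based operation, where independent image blocks can be compressed concurrently. The following CUDA sketch illustrates only that general parallelization pattern, mapping one independent image block to one CUDA thread block; it is not the paper's compression algorithm, and the block size, the per-block operation (mean removal as a stand-in), and all identifiers are hypothetical.

```cuda
// Illustrative sketch of block-based parallelization: one CUDA thread block
// processes one independent image block. The per-block operation here
// (mean removal) is a hypothetical stand-in for a real compression step.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

#define BLOCK_PIXELS 256   // hypothetical number of samples per image block

__global__ void process_blocks(const float* samples, float* residuals,
                               int num_blocks)
{
    __shared__ float tile[BLOCK_PIXELS];

    int block_id = blockIdx.x;   // one CUDA block per independent image block
    int tid      = threadIdx.x;
    if (block_id >= num_blocks) return;

    // Stage this block's samples in shared memory.
    tile[tid] = samples[block_id * BLOCK_PIXELS + tid];
    __syncthreads();

    // Parallel reduction to obtain the block sum in tile[0].
    for (int stride = BLOCK_PIXELS / 2; stride > 0; stride >>= 1) {
        if (tid < stride) tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    float mean = tile[0] / BLOCK_PIXELS;

    // Each thread writes its mean-removed sample.
    int idx = block_id * BLOCK_PIXELS + tid;
    residuals[idx] = samples[idx] - mean;
}

int main()
{
    const int num_blocks = 64;                 // hypothetical image size
    const int n = num_blocks * BLOCK_PIXELS;
    std::vector<float> h(n, 1.0f);

    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one thread block per image block.
    process_blocks<<<num_blocks, BLOCK_PIXELS>>>(d_in, d_out, num_blocks);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Because each image block is processed without reference to its neighbours, the kernel launch scales directly with the number of blocks, which is the property the abstract identifies as the source of the reported speed-ups.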
