Abstract

With the development of remote sensing technology, the spatial and spectral resolutions of hyperspectral images have become increasingly high. To overcome the resulting difficulties in the storage, transmission, and manipulation of hyperspectral images, an effective compression algorithm is required. Clustered differential pulse code modulation (C-DPCM), a prediction-based lossless compression algorithm for hyperspectral images, achieves a relatively high compression ratio, but its efficiency still needs improvement. This paper presents a parallel implementation of the C-DPCM algorithm on graphics processing units (GPUs) with the compute unified device architecture (CUDA), a parallel computing platform and programming model developed by NVIDIA. Three optimization strategies are applied: a version that uses shared memory and registers, a version that employs multiple CUDA streams, and a version that uses multiple GPUs. In addition, we studied how to distribute the classes among the GPUs so as to minimize the processing time. Finally, we reduced the compression time from roughly half an hour to an hour down to several seconds, with almost no loss in accuracy.
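The paper's implementation is not reproduced here, but the multistream strategy can be illustrated with a minimal CUDA sketch. The kernel name `computeResiduals`, the single-coefficient predictor, and the constants `NUM_CLASSES` and `PIXELS_PER_CLASS` are hypothetical placeholders, not the authors' code: each class's residual computation is issued on its own CUDA stream so that per-class kernels can execute concurrently when GPU resources allow.

```cuda
// Hypothetical sketch of per-class multistream processing.
// All names and constants are illustrative, not taken from the paper.
#include <cuda_runtime.h>

#define NUM_CLASSES 4        // assumed number of clustered classes
#define PIXELS_PER_CLASS 4096

// Each thread predicts one pixel of the current band from the previous
// band with a single assumed coefficient and stores the residual.
__global__ void computeResiduals(const float* curBand, const float* prevBand,
                                 float coeff, float* residuals, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float predicted = coeff * prevBand[i];   // 1-tap linear predictor
        residuals[i] = curBand[i] - predicted;   // residual to be entropy coded
    }
}

int main() {
    cudaStream_t streams[NUM_CLASSES];
    float *cur[NUM_CLASSES], *prev[NUM_CLASSES], *res[NUM_CLASSES];
    size_t bytes = PIXELS_PER_CLASS * sizeof(float);

    for (int c = 0; c < NUM_CLASSES; ++c) {
        cudaStreamCreate(&streams[c]);
        cudaMalloc(&cur[c], bytes);
        cudaMalloc(&prev[c], bytes);
        cudaMalloc(&res[c], bytes);
        cudaMemset(cur[c], 0, bytes);            // placeholder data
        cudaMemset(prev[c], 0, bytes);
    }

    // One launch per class on its own stream, so kernels for different
    // classes can be scheduled concurrently instead of serially.
    int threads = 256;
    int blocks = (PIXELS_PER_CLASS + threads - 1) / threads;
    for (int c = 0; c < NUM_CLASSES; ++c) {
        computeResiduals<<<blocks, threads, 0, streams[c]>>>(
            cur[c], prev[c], 0.98f /* assumed coefficient */, res[c],
            PIXELS_PER_CLASS);
    }
    cudaDeviceSynchronize();

    for (int c = 0; c < NUM_CLASSES; ++c) {
        cudaStreamDestroy(streams[c]);
        cudaFree(cur[c]); cudaFree(prev[c]); cudaFree(res[c]);
    }
    return 0;
}
```

In the full algorithm, the per-class predictor coefficients would first be solved by least squares over each class's samples, and a practical implementation would pair each stream with pinned host buffers and `cudaMemcpyAsync` so that host-device transfers overlap with kernel execution.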
