Abstract

The analysis of hyperspectral images is usually computationally demanding because of their high dimensionality. To mitigate this problem, band selection (BS) is widely used to reduce the dimensionality before the analysis. The aim is to extract a subset of the original bands of the hyperspectral image that preserves most of the information contained in the original data. BS can be performed by ranking the bands according to a score assigned by specific criteria; in this case, BS becomes the so-called band prioritization (BP). This paper focuses on BP algorithms based on the following criteria: signal-to-noise ratio, kurtosis, entropy, information divergence, variance, and linearly constrained minimum variance. In particular, an optimized serial C version has been developed for each algorithm, from which two parallel versions have been derived using OpenMP and NVIDIA's compute unified device architecture (CUDA). The former targets multi-core CPUs, while the latter targets many-core graphics processing units. Each version of these algorithms has been tested on a large database containing both synthetic and real hyperspectral images. In this way, scientists can integrate the proposed suite of efficient BP algorithms into existing frameworks, choosing the most suitable technique for their specific applications.
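As a concrete illustration of how a BP criterion of this kind can be parallelized with OpenMP, the sketch below ranks bands by their variance with one parallel loop over bands. It is a minimal sketch under stated assumptions, not the paper's implementation: the band-sequential memory layout and the names `bp_variance`, `cube`, `nbands`, and `npix` are introduced here for illustration only.

```c
/* Minimal sketch of variance-based band prioritization (BP) with OpenMP.
 * Assumes a band-sequential cube: `npix` pixels per band, `nbands` bands.
 * Names and layout are hypothetical, not taken from the paper.
 * Compile: gcc -O2 -fopenmp bp_variance.c -o bp_variance */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

typedef struct { int band; double score; } bp_entry;

/* Sort descending by score so the highest-variance bands come first. */
static int cmp_desc(const void *a, const void *b) {
    double d = ((const bp_entry *)b)->score - ((const bp_entry *)a)->score;
    return (d > 0) - (d < 0);
}

void bp_variance(const float *cube, int nbands, long npix, bp_entry *out) {
    /* Bands are scored independently, so the loop parallelizes
     * trivially across the cores of a multi-core CPU. */
    #pragma omp parallel for schedule(static)
    for (int b = 0; b < nbands; b++) {
        const float *band = cube + (long)b * npix;
        double mean = 0.0, var = 0.0;
        for (long i = 0; i < npix; i++) mean += band[i];
        mean /= (double)npix;
        for (long i = 0; i < npix; i++) {
            double d = band[i] - mean;
            var += d * d;
        }
        out[b].band  = b;
        out[b].score = var / (double)npix;   /* per-band variance score */
    }
    /* Prioritize: rank bands by score; BS keeps the first k entries. */
    qsort(out, (size_t)nbands, sizeof(bp_entry), cmp_desc);
}

int main(void) {
    enum { NB = 8 };
    const long npix = 64L * 64L;
    float *cube = malloc((size_t)NB * (size_t)npix * sizeof *cube);
    if (!cube) return 1;
    bp_entry rank[NB];
    /* Synthetic data: later bands get a wider value range, hence a
     * higher variance, so the expected ranking is easy to verify. */
    for (long i = 0; i < NB * npix; i++)
        cube[i] = (float)rand() / (float)RAND_MAX * (float)(1 + i / npix);
    bp_variance(cube, NB, npix, rank);
    for (int b = 0; b < NB; b++)
        printf("rank %d: band %d (variance %.4f)\n",
               b, rank[b].band, rank[b].score);
    free(cube);
    return 0;
}
```

A CUDA version would typically map the same per-band reduction onto the GPU; the property both parallel versions exploit is that each band's score can be computed independently of the others, which holds for all six criteria considered in the paper.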
