Abstract

The classification of hyperspectral imagery (HSI) is an important part of HSI applications. The nearest-regularized subspace (NRS), one of the sparse-representation methods, is an effective classifier for HSI. However, its high computational complexity limits its use in time-critical scenarios. To improve the computational efficiency of the NRS classifier, this article proposes a new parallel implementation on the graphics processing unit (GPU). First, an optimized single-GPU algorithm is designed for parallel computing, and a multi-GPU version is then developed to further improve efficiency. In addition, optimal parameters for the data stream and memory strategy are put forward to adapt to the parallel environment. To verify the algorithm's effectiveness, a serial algorithm based on the central processing unit is used for comparison, and the performance of the multi-GPU approach is tested on two hyperspectral image datasets. Compared with the serial algorithm, the multi-GPU method with four GPUs achieves up to $360\times$ acceleration.
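The article does not reproduce its implementation here, but the multi-GPU strategy it describes, partitioning the test pixels across devices and driving each device asynchronously with its own CUDA stream, can be sketched roughly as follows. This is an assumed illustration, not the authors' code; classifyChunk and all other names are hypothetical placeholders.

// Minimal sketch of multi-GPU work partitioning with CUDA streams, assuming
// the test pixels can be split evenly across devices. classifyChunk is a
// hypothetical stand-in for the per-device NRS classification pipeline.
#include <algorithm>
#include <cuda_runtime.h>
#include <vector>

// Placeholder: the real per-device NRS kernels/cuBLAS calls would go here.
void classifyChunk(int device, const float* h_pixels, int count, int bands,
                   cudaStream_t stream) {
    (void)device; (void)h_pixels; (void)count; (void)bands; (void)stream;
}

void classifyMultiGpu(const float* h_pixels, int numPixels, int bands) {
    int numGpus = 0;
    cudaGetDeviceCount(&numGpus);
    std::vector<cudaStream_t> streams(numGpus);
    int chunk = (numPixels + numGpus - 1) / numGpus;   // pixels per device

    for (int d = 0; d < numGpus; ++d) {
        cudaSetDevice(d);                              // bind to device d
        cudaStreamCreate(&streams[d]);
        int offset = d * chunk;
        int count  = std::min(chunk, numPixels - offset);
        if (count > 0)
            classifyChunk(d, h_pixels + (size_t)offset * bands,
                          count, bands, streams[d]);   // asynchronous per-device work
    }
    for (int d = 0; d < numGpus; ++d) {                // wait for every device to finish
        cudaSetDevice(d);
        cudaStreamSynchronize(streams[d]);
        cudaStreamDestroy(streams[d]);
    }
}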

Highlights

  • Over the last decade, hyperspectral imagery (HSI) has been garnering growing attention in the remote sensing field [1]–[3]

  • A series of experiments is performed to evaluate the efficiency of the single- and multi-graphics processing unit (GPU) implementations

  • The computational efficiency of each step is compared to demonstrate the effectiveness of both the single- and multi-GPU implementations


Summary

INTRODUCTION

Over the last decade, hyperspectral imagery (HSI) has been garnering growing attention in the remote sensing field [1]–[3]. On a single device, classification algorithms for hyperspectral images face severe challenges because of their high computational complexity; the earlier models mentioned in the literature are all time-consuming, which limits their application in real-time scenarios. A novel parallel NRS algorithm is therefore proposed for hyperspectral image classification on multiple GPUs. First, an optimized NRS serial method is proposed, which stores intermediate matrix multiplication results in memory to avoid repeating the same large matrix multiplications. Based on this optimized serial algorithm, a single-GPU method is then developed to speed up the computation of the large matrices.
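As a rough illustration of the caching idea (an assumed sketch, not the paper's implementation): the NRS per-class weights involve products such as X_c^T X_c that depend only on the class-c training dictionary, so they can be computed once on the GPU with cuBLAS and reused for every test pixel. The function name and memory layout below are illustrative assumptions.

// Assumed sketch of the matrix-product caching step using cuBLAS:
// the per-class Gram matrix G_c = X_c^T * X_c depends only on the training
// samples, so it is computed once and kept in device memory for reuse.
#include <cublas_v2.h>
#include <cuda_runtime.h>

// d_Xc: bands x n_c class-c training matrix (column-major, device pointer)
// d_Gc: n_c x n_c output Gram matrix (device pointer)
void cacheClassGram(cublasHandle_t handle, const float* d_Xc,
                    int bands, int n_c, float* d_Gc) {
    const float one = 1.0f, zero = 0.0f;
    // G_c = Xc^T (n_c x bands) * Xc (bands x n_c)
    cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                n_c, n_c, bands,
                &one, d_Xc, bands,
                      d_Xc, bands,
                &zero, d_Gc, n_c);
}

Under this assumption, classifying each test pixel then only requires the much smaller product X_c^T y and a regularized solve against the cached G_c, which is exactly the repeated large-matrix work the optimization avoids.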

NRS Classifier
Serial Algorithm Analysis
MULTI-GPU PARALLEL CLASSIFICATION ALGORITHM
CUDA-Based GPGPU Application
Single-GPU-Based NRS Algorithm
NRS Algorithm Based on Multi-GPU
EXPERIMENTAL RESULTS
Experiment Setup
Classification Accuracy Analysis
Computational Efficiency Comparison for Two Experiments
CONCLUSION