Abstract

Hyperspectral images (HSIs) have been widely used in many fields of application, but it remains extremely challenging to obtain high classification accuracy, especially when only a small number of training samples is available in practice, since acquiring enough labeled samples is time-consuming and laborious. To address the limited training samples and unsatisfactory classification accuracy, an efficient hybrid dense network based on a dual-attention mechanism was proposed. A stacked autoencoder was first used to reduce the dimensions of the HSIs. A hybrid dense network framework with two feature-extraction branches, built on 3D and 2D convolutional neural network models, was then established to extract abundant spectral–spatial features. In addition, spatial attention and channel attention were jointly introduced to achieve selective learning of the features derived from HSIs, further refining the feature maps so that the more important features are retained. To improve computational efficiency and prevent overfitting, batch normalization and dropout layers were adopted. The Indian Pines, Pavia University, and Salinas datasets were selected to evaluate the classification performance, with 5%, 1%, and 1% of the samples per class randomly selected for training, respectively. In comparison with REF-SVM, 3D-CNN, HybridSN, SSRN, and R-HybridSN, the overall accuracy of the proposed method still reached 96.80%, 98.28%, and 98.85%, respectively. These results show that the method can achieve satisfactory classification performance even with fewer training samples.
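The dual-attention mechanism described above reweights feature maps along both the channel and the spatial dimensions. The paper does not give implementation details, so the following is only a minimal numpy sketch of the general idea (channel attention from global average pooling, then spatial attention from a channel-wise average); the pooling choices, activation, and feature-map sizes are assumptions, not the authors' exact design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(fmap):
    """Simplified channel-then-spatial attention on an (H, W, C) feature map."""
    # Channel attention: global average pooling -> one weight per channel.
    chan_w = sigmoid(fmap.mean(axis=(0, 1)))                # shape (C,)
    refined = fmap * chan_w                                 # reweight channels
    # Spatial attention: average across channels -> one weight per pixel.
    spat_w = sigmoid(refined.mean(axis=2, keepdims=True))   # shape (H, W, 1)
    return refined * spat_w

rng = np.random.default_rng(1)
fmap = rng.standard_normal((9, 9, 32))   # hypothetical 9x9 patch, 32 channels
out = dual_attention(fmap)
print(out.shape)  # (9, 9, 32): same shape, features selectively reweighted
```

In a full network these attention weights would be produced by small learned layers (e.g. a shared MLP and a convolution) rather than by fixed pooling alone; the sketch only illustrates where the reweighting acts.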

Highlights

  • Hyperspectral images (HSIs) contain rich spatial and spectral information, and have been widely used in many fields of application, such as environmental science, precision agriculture, and land cover mapping [1,2,3,4]

  • An autoencoder (AE) is an unsupervised learning method, whose structure is similar to that of a general feedforward neural network; its function is to perform representation learning on the input information, which has been applied to dimension reduction and abnormal data detection [42,43]

  • To address the limited sample size of labeled HSI data and the low classification accuracy of current neural network models, a hybrid dense network with a dual-attention mechanism was proposed from the perspective of network optimization


Summary

Introduction

Hyperspectral images (HSIs) contain rich spatial and spectral information, and have been widely used in many fields of application, such as environmental science, precision agriculture, and land cover mapping [1,2,3,4]. However, the high dimensionality and strong inter-band redundancy of HSI data make direct classification difficult, especially with limited training samples. To solve this problem, feature extraction must be carried out to reduce the dimensions of HSIs before inputting them into classifiers. Linear and nonlinear feature-extraction methods are generally applied to HSI classification, but linear dimension-reduction methods cannot adequately handle the nonlinear structure of HSI data, and deep features cannot be extracted. An autoencoder (AE) is an unsupervised learning method whose structure is similar to that of a general feedforward neural network; it performs representation learning on the input, and has been applied to dimension reduction and abnormal data detection [42,43]. A stacked AE (SAE) was built by stacking basic autoencoders to extract features from the original HSIs and perform dimension reduction: the dimensions of the input data are reduced layer by layer, transforming the high-dimensional input into low-dimensional features and removing redundancy from the original HSI data
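The layer-by-layer reduction performed by a stacked autoencoder can be sketched as follows. This is only a minimal numpy illustration of the encoder data flow, with hypothetical sizes (200 spectral bands reduced to 30 features) and random, untrained weights; in the actual method each layer's weights are learned by minimizing reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 pixels, each with 200 spectral bands (hypothetical sizes).
n_bands, n_hidden1, n_hidden2 = 200, 64, 30
pixels = rng.standard_normal((100, n_bands))

def encode(x, w, b):
    return np.tanh(x @ w + b)  # one encoder layer with tanh activation

# Two stacked encoder layers: 200 -> 64 -> 30. Weights are random here,
# purely to show how the dimensionality shrinks stage by stage.
w1, b1 = rng.standard_normal((n_bands, n_hidden1)) * 0.1, np.zeros(n_hidden1)
w2, b2 = rng.standard_normal((n_hidden1, n_hidden2)) * 0.1, np.zeros(n_hidden2)

codes = encode(encode(pixels, w1, b1), w2, b2)
print(codes.shape)  # (100, 30): each spectrum compressed to 30 features
```

In practice each autoencoder is pretrained to reconstruct its own input, the decoders are discarded, and the stacked encoders feed their low-dimensional output to the downstream classifier.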


