Abstract

This article proposes a hierarchical residual network with an attention mechanism (HResNetAM) for spectral-spatial classification of hyperspectral images (HSIs), aiming to improve on conventional deep learning networks. Straightforward convolutional neural network (CNN)-based models are limited in exploiting multiscale spatial and spectral features, which is a key factor in handling the high-dimensional, nonlinear characteristics of HSIs. The proposed hierarchical residual network extracts multiscale spatial and spectral features at a granular level, enlarging the network's range of receptive fields and thereby enhancing its feature representation ability. In addition, an attention mechanism assigns adaptive weights to spatial and spectral features of different scales, further improving the discriminative ability of the extracted features. A double-branch structure is also exploited to extract spectral and spatial features in parallel with corresponding convolution kernels, and the extracted multiscale spatial and spectral features are fused for hyperspectral image classification. Experiments on four benchmark hyperspectral datasets, collected by different sensors at different acquisition times, show that the proposed method is competitive with other state-of-the-art deep learning models in classification performance.
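The core ideas of the abstract — a granular, hierarchical residual split that widens the receptive-field range, plus adaptive attention weights over the resulting scales — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's architecture: the function name, the channel-group splitting, and the 3x3 mean filter standing in for a learned convolution are all illustrative.

```python
import numpy as np

def hierarchical_residual_attention(x, n_scales=4):
    """Toy sketch of a hierarchical residual block with scale attention.
    `x` is a (channels, height, width) feature map whose channel count
    is divisible by `n_scales`. All layers here are illustrative
    stand-ins, not the paper's exact HResNetAM design."""
    groups = np.split(x, n_scales, axis=0)         # split channels into scale groups

    def smooth(g):                                 # stand-in for a learned 3x3 conv
        p = np.pad(g, ((0, 0), (1, 1), (1, 1)), mode="edge")
        h, w = g.shape[1], g.shape[2]
        return sum(p[:, i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    outs, carry = [], None
    for g in groups:                               # hierarchical connections:
        inp = g if carry is None else g + carry    # each group also sees the previous
        carry = smooth(inp)                        # group's output -> larger receptive field
        outs.append(carry)

    # attention: one adaptive weight per scale via global average pooling + softmax
    logits = np.array([o.mean() for o in outs])
    w = np.exp(logits - logits.max())
    w /= w.sum()
    fused = np.concatenate([wi * o for wi, o in zip(w, outs)], axis=0)
    return x + fused                               # residual connection
```

Each successive group is filtered on top of the previous group's output, so later groups aggregate progressively larger neighborhoods — the "granular" multiscale extraction the abstract describes — and the softmax weights play the role of the attention mechanism that re-weights scales adaptively.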

Highlights

  • Remote sensing technology is one of the most important components in the field of earth observation (EO), which can perceive and recognize the observed scenes using their different reflection characteristics without making physical contact with the objects

  • To evaluate the classification ability of the HResNetAM model under different numbers of scales and kernels, we assess the influence of these two parameters using a cross-validation strategy

  • We propose a hierarchical residual network (HResNet) with an attention mechanism that learns spectral and spatial features at different scales, and these features are fused for joint classification
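The second highlight mentions evaluating the scale and kernel hyperparameters with cross-validation. A minimal k-fold cross-validation loop can be sketched as follows; the nearest-centroid classifier is purely a stand-in (the paper evaluates its own network), and all names here are illustrative assumptions.

```python
import numpy as np

def kfold_accuracy(X, y, k=5, seed=0):
    """Mean k-fold cross-validation accuracy of a nearest-centroid
    stand-in classifier. `X` is (n_samples, n_features), `y` holds
    integer class labels. In the paper's setting, this loop would be
    repeated for each candidate (number of scales, kernel size) pair."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))                  # shuffle before splitting
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]                            # fold i held out for testing
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        labels = np.array(sorted(centroids))
        dists = np.stack([np.linalg.norm(X[test] - centroids[c], axis=1)
                          for c in labels])
        pred = labels[dists.argmin(axis=0)]        # nearest centroid wins
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))
```

Hyperparameter selection then amounts to running this loop once per candidate setting and keeping the setting with the highest mean accuracy.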


Summary

Introduction

Remote sensing technology is one of the most important components in the field of earth observation (EO): it can perceive and recognize observed scenes through their different reflection characteristics without making physical contact with the objects. Based on these unique spectral and spatial characteristics, HSI classification aims to determine the ground category of each pixel, and it has been widely used in, e.g., environmental monitoring, resource management, urban planning, and military and security applications over the past decade [2]. A key challenge is to effectively integrate spatial information for spectral-spatial classification and thereby improve pixel-wise classification performance. To address these typical problems, many classic machine learning models have been exploited for HSI classification [3]. Deep learning techniques have revolutionized remote sensing image processing, especially in the HSI classification field [5]–[8]. According to the feature types employed for classification, deep-learning-based HSI classification methods can be generally divided into three categories: spectral-feature-based, spatial-feature-based, and spectral-spatial-feature-based networks. Because both spatial and spectral information contribute to HSI classification, the spatial-feature- and spectral-spatial-feature-based networks have attracted more interest in recent years [9].


