Abstract

Representing features at multiple scales is of great significance for hyperspectral image (HSI) classification. However, most existing methods improve feature representation only by extracting features at different resolutions. Moreover, existing attention methods have not taken full advantage of HSI data: the receptive field sizes of the artificial neurons in each of their layers are identical, whereas in neuroscience the receptive field sizes of visual cortical neurons adapt to the neural stimulus. Therefore, in this paper we propose a Res2Net with spectral-spatial and channel attention (SSCAR2N) for hyperspectral image classification. To extract multi-scale features of HSIs at a more granular level while keeping computation and parameter redundancy low, the Res2Net block is adopted. To further recalibrate features along the spectral, spatial, and channel dimensions simultaneously, we propose a visual threefold (spectral, spatial, and channel) attention mechanism, in which a dynamic neuron selection mechanism is designed that allows each neuron to adaptively adjust its receptive field size according to the multiple scales of the input information. Comparative experiments on three benchmark hyperspectral image data sets demonstrate that the proposed SSCAR2N outperforms several state-of-the-art deep-learning-based HSI classification methods.
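The dynamic neuron selection described in the abstract resembles the selective-kernel idea: features from branches with different receptive fields are fused, squeezed by global average pooling, and a softmax over the branches produces per-channel weights for recombining them. The sketch below is a minimal numpy illustration of that general mechanism under assumed shapes and weight matrices; the function and parameter names are illustrative and do not reproduce the authors' SSCAR2N implementation.

```python
import numpy as np

def softmax(z, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_fusion(branches, W_reduce, W_branch):
    """Fuse M feature-map branches with softmax branch weights.

    branches : sequence of M arrays, each (C, H, W), e.g. outputs of
               convolutions with different kernel sizes (receptive fields).
    W_reduce : (d, C) bottleneck projection (illustrative assumption).
    W_branch : list of M (C, d) projections, one per branch.
    Returns the fused map (C, H, W) and the (M, C) branch weights.
    """
    U = np.sum(branches, axis=0)                   # element-wise fuse: (C, H, W)
    s = U.mean(axis=(1, 2))                        # global average pool: (C,)
    z = np.maximum(W_reduce @ s, 0.0)              # bottleneck FC + ReLU: (d,)
    logits = np.stack([W @ z for W in W_branch])   # (M, C)
    a = softmax(logits, axis=0)                    # per-channel branch weights
    V = sum(a[m][:, None, None] * branches[m] for m in range(len(branches)))
    return V, a
```

Because the softmax is taken across branches for each channel independently, every channel of the output is a convex combination of the corresponding channels of the input branches, which is what lets each "neuron" emphasize the receptive field best matched to its input.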
