Abstract

Representing features at multiple scales is of great significance for hyperspectral image (HSI) classification. However, most existing methods improve feature representation only by extracting features at different resolutions. Moreover, existing attention methods do not take full advantage of HSI data: the receptive field sizes of the artificial neurons in each layer are identical, whereas in neuroscience the receptive field sizes of visual cortical neurons adapt to the stimulus. Therefore, in this paper, we propose a Res2Net with spectral-spatial and channel attention (SSCAR2N) for hyperspectral image classification. To extract multi-scale features of the HSI at a more granular level while keeping the computational cost and parameter redundancy low, the Res2Net block is adopted. To further recalibrate features along the spectral, spatial, and channel dimensions simultaneously, we propose a threefold (spectral, spatial, and channel) visual attention mechanism, in which a dynamic neuron selection mechanism is designed that allows each neuron to adaptively adjust its receptive field size according to the multiple scales of the input information. Comparison experiments on three benchmark hyperspectral image data sets demonstrate that the proposed SSCAR2N outperforms several state-of-the-art deep-learning-based HSI classification methods.
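To make the two building blocks concrete, the following is a minimal PyTorch-style sketch of a Res2Net-style multi-scale block and a selective-kernel-style attention module whose branch weights adapt the effective receptive field. The module names, kernel sizes, and the `scale`/`reduction` values are illustrative assumptions only; this is not the exact SSCAR2N configuration, and the paper's full threefold spectral-spatial-channel attention is not reproduced here.

```python
import torch
import torch.nn as nn


class Res2NetBlock(nn.Module):
    """Hierarchical multi-scale block: the input channels are split into
    `scale` groups and each 3x3 conv receives the previous group's output,
    so the effective receptive field grows group by group."""

    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        assert channels % scale == 0, "channels must be divisible by scale"
        self.scale = scale
        width = channels // scale
        self.convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            )
            for _ in range(scale - 1)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.scale, dim=1)
        outputs, prev = [splits[0]], None        # first group passes through unchanged
        for i, conv in enumerate(self.convs):
            inp = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = conv(inp)
            outputs.append(prev)
        return torch.cat(outputs, dim=1) + x     # residual connection


class SelectiveKernelAttention(nn.Module):
    """Selective-kernel-style attention: two branches with different
    receptive fields are fused, and a softmax over the branch dimension
    lets each channel choose how much of each receptive field to use."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2, bias=False)
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 2 * channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))                     # global average pooling
        a = self.fc(s).view(x.size(0), 2, x.size(1), 1, 1)
        a = a.softmax(dim=1)                               # weights over the two branches
        return a[:, 0] * u3 + a[:, 1] * u5


if __name__ == "__main__":
    feats = torch.randn(2, 64, 9, 9)      # e.g. a batch of 9x9 HSI patches with 64 channels
    feats = Res2NetBlock(64, scale=4)(feats)
    feats = SelectiveKernelAttention(64)(feats)
    print(feats.shape)                    # torch.Size([2, 64, 9, 9])
```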
