Abstract

Deep learning has introduced new methods for hyperspectral image (HSI) classification, in which images are usually pre-processed by dimensionality reduction and then partitioned into patches that are fed to a deep network for feature extraction. However, convolutional kernels of fixed size have limited learning capability and tend to lose feature details. In this paper, a new global-local block spatial-spectral fusion attention (GBSFA) model is proposed. An improved Inception structure is designed to extract feature information from the global block, while a self-attention mechanism and spatial pyramid pooling (SPP) are applied to focus on the inter-class edge features of the local block. A long short-term memory (LSTM) network is combined to extract the effective information of the spectral dimension. Finally, the features extracted from the spatial and spectral dimensions are fused in the fully connected layer for classification training. Experimental results show that the classification accuracy of the proposed approach is higher than that of other comparative methods when small training sets are used.
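To illustrate the kind of dual-branch spatial-spectral fusion the abstract describes, the following is a minimal PyTorch sketch: a multi-scale (Inception-style) convolutional branch for patch-level spatial features, an LSTM branch over the spectral sequence of the centre pixel, and a fully connected fusion classifier. The layer widths, patch size, band count, and class count are illustrative assumptions, not the authors' exact GBSFA architecture, which additionally uses self-attention and SPP on local blocks.

```python
# Minimal sketch of a dual-branch spatial-spectral classifier in the spirit of
# the abstract; all sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class SpatialSpectralNet(nn.Module):
    def __init__(self, num_bands=30, num_classes=16):
        super().__init__()
        # Spatial branch: small multi-scale (Inception-style) convolution block
        self.branch1 = nn.Conv2d(num_bands, 32, kernel_size=1)
        self.branch3 = nn.Conv2d(num_bands, 32, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(num_bands, 32, kernel_size=5, padding=2)
        self.spatial_pool = nn.AdaptiveAvgPool2d(1)  # collapse spatial dims
        # Spectral branch: treat each band value at the centre pixel as a time
        # step and let an LSTM summarise the spectral sequence
        self.lstm = nn.LSTM(input_size=1, hidden_size=64, batch_first=True)
        # Fusion and classification in fully connected layers
        self.classifier = nn.Sequential(
            nn.Linear(32 * 3 + 64, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, patch):
        # patch: (batch, num_bands, patch_size, patch_size)
        spatial = torch.cat(
            [self.branch1(patch), self.branch3(patch), self.branch5(patch)], dim=1
        )
        spatial = self.spatial_pool(spatial).flatten(1)       # (batch, 96)
        centre = patch.size(-1) // 2
        spectrum = patch[:, :, centre, centre].unsqueeze(-1)  # (batch, bands, 1)
        _, (h_n, _) = self.lstm(spectrum)
        spectral = h_n[-1]                                    # (batch, 64)
        return self.classifier(torch.cat([spatial, spectral], dim=1))

# Example: classify a batch of 4 patches with 30 spectral bands
model = SpatialSpectralNet()
logits = model(torch.randn(4, 30, 9, 9))
print(logits.shape)  # torch.Size([4, 16])
```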
