Abstract

Deep learning has brought new methods to hyperspectral image (HSI) classification, in which images are usually pre-processed by dimensionality reduction and then partitioned into patches that are fed to a deep network for feature extraction. However, the learning capability of convolutional kernels of fixed size is limited, so they tend to lose feature details. In this paper, a new global-local block spatial-spectral fusion attention (GBSFA) model is proposed. An improved Inception structure is designed to extract feature information from the global block, while a self-attention mechanism and spatial pyramid pooling (SPP) are applied to focus on the inter-class edge features of the local block. A long short-term memory (LSTM) network is combined with these modules to extract effective information along the spectral dimension. Finally, the features extracted from the spatial and spectral dimensions are passed to a fully connected layer for classification training. Experimental results show that the proposed approach achieves higher classification accuracy than comparative methods on small training sets.
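Of the components named above, spatial pyramid pooling is the one whose role is easiest to illustrate in isolation: it turns a feature map of arbitrary spatial size into a fixed-length vector, which is what allows local blocks of varying extent to feed the same downstream layers. The sketch below is a minimal NumPy illustration, not the paper's implementation; the pyramid levels `(1, 2, 4)` and the use of max pooling are illustrative assumptions.

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Pool a C x H x W feature map at several grid levels and
    concatenate the per-bin, per-channel maxima into one vector.

    The level set (1, 2, 4) and max pooling are illustrative
    choices, not taken from the paper.
    """
    C, H, W = feature_map.shape
    pooled = []
    for n in levels:
        # Split height and width into n roughly equal bins.
        h_edges = np.linspace(0, H, n + 1).astype(int)
        w_edges = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                bin_ = feature_map[:,
                                   h_edges[i]:h_edges[i + 1],
                                   w_edges[j]:w_edges[j + 1]]
                pooled.append(bin_.max(axis=(1, 2)))  # per-channel max
    # Output length is C * sum(n * n for n in levels),
    # independent of the input's H and W.
    return np.concatenate(pooled)

fm = np.random.rand(8, 13, 13)  # e.g. an 8-channel 13x13 local block
vec = spatial_pyramid_pool(fm)
print(vec.shape)  # (168,) since 8 * (1 + 4 + 16) = 168
```

Because the output length depends only on the channel count and the pyramid levels, differently sized local blocks all map to vectors the fully connected layer can accept.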
