Abstract

Deep learning has achieved impressive success in computer vision, especially remote sensing. It is well known that different deep models are able to extract different kinds of features from remote sensing images. For example, convolutional neural networks (CNN) can extract neighbourhood spatial features in the short-range region, graph convolutional networks (GCN) can extract structural features in the middle- and long-range region, and the encoder-decoder (ED) can obtain reconstruction features from an image. Thus, it is challenging to design a model that combines these different models to extract fused features for a hyperspectral image classification task. To this end, this paper proposes a three-branch attention deep model (TADM) for the classification of hyperspectral images. The model consists of three branches: a graph convolutional neural network, a convolutional neural network, and a deep encoder-decoder. These three branches first extract structural features, spatial-spectral joint features, and reconstructed encoded features from hyperspectral images, respectively. Then, a cross-fusion strategy and an attention mechanism are employed to automatically learn the fusion parameters and complete the feature fusion. Finally, the hybrid features are fed into a standard classifier for pixel-level classification. Extensive experiments on two real-world hyperspectral datasets (Houston and Trento) demonstrate the effectiveness and superiority of the proposed method. Compared with other baseline classification methods, such as FuNet-C and Two-Branch CNN(H), the proposed method achieves the highest classification results. Specifically, overall classification accuracies of 93.25% and 95.84% were obtained on the Houston and Trento data, respectively.
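
The abstract outlines the overall pipeline: three branches extract structural, spatial-spectral, and reconstruction features; an attention mechanism learns per-branch fusion weights; and the fused features go to a pixel-level classifier. The PyTorch sketch below illustrates that general shape only. The layer sizes, the simplified one-layer graph convolution, the softmax attention over the three branch features, and all class and parameter names are illustrative assumptions, since the abstract does not specify the paper's exact design (for example, the details of its cross-fusion strategy).

# Minimal sketch of a three-branch attention fusion model in PyTorch.
# Layer sizes, the fusion rule, and the simplified GCN propagation are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNBranch(nn.Module):
    """One-layer graph convolution: H' = ReLU(A_hat @ X @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features, adj: (N, N) normalized adjacency
        return F.relu(adj @ self.linear(x))


class SimpleCNNBranch(nn.Module):
    """Small CNN over spatial patches centred on each pixel."""
    def __init__(self, bands, out_dim):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, patches):
        # patches: (N, bands, patch, patch)
        return self.proj(self.conv(patches).flatten(1))


class SimpleEDBranch(nn.Module):
    """Spectral encoder-decoder; the encoding serves as the branch feature."""
    def __init__(self, bands, out_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(bands, 128), nn.ReLU(),
                                     nn.Linear(128, out_dim))
        self.decoder = nn.Sequential(nn.Linear(out_dim, 128), nn.ReLU(),
                                     nn.Linear(128, bands))

    def forward(self, spectra):
        z = self.encoder(spectra)   # reconstruction (encoded) feature
        recon = self.decoder(z)     # reconstruction for an auxiliary loss
        return z, recon


class ThreeBranchAttentionModel(nn.Module):
    def __init__(self, bands, feat_dim, num_classes):
        super().__init__()
        self.gcn = SimpleGCNBranch(bands, feat_dim)
        self.cnn = SimpleCNNBranch(bands, feat_dim)
        self.ed = SimpleEDBranch(bands, feat_dim)
        # Attention scores one weight per branch from the concatenated features.
        self.attn = nn.Linear(3 * feat_dim, 3)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, spectra, patches, adj):
        f_gcn = self.gcn(spectra, adj)
        f_cnn = self.cnn(patches)
        f_ed, recon = self.ed(spectra)
        stacked = torch.stack([f_gcn, f_cnn, f_ed], dim=1)      # (N, 3, feat_dim)
        weights = torch.softmax(self.attn(stacked.flatten(1)), dim=-1)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)    # weighted fusion
        return self.classifier(fused), recon


# Usage with random data: 100 pixels, 144 bands, 7x7 patches, 15 classes.
N, bands, num_classes = 100, 144, 15
model = ThreeBranchAttentionModel(bands, feat_dim=64, num_classes=num_classes)
logits, recon = model(torch.randn(N, bands),
                      torch.randn(N, bands, 7, 7),
                      torch.eye(N))
print(logits.shape, recon.shape)  # torch.Size([100, 15]) torch.Size([100, 144])

In a full training setup, the classification loss on the logits would typically be combined with a reconstruction loss on the encoder-decoder output; the abstract does not state how (or whether) the paper weights such terms.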
