Abstract

Recently, graph convolutional networks (GCNs) have been developed to exploit the spatial relationships between pixels, achieving improved classification performance on hyperspectral images (HSIs). However, these methods fail to sufficiently leverage the relationships among spectral bands in HSI data. To address this, we propose an adaptive cross-attention-driven spatial–spectral graph convolutional network (ACSS-GCN), which is composed of a spatial GCN (Sa-GCN) subnetwork, a spectral GCN (Se-GCN) subnetwork, and a graph cross-attention fusion module (GCAFM). Specifically, Sa-GCN and Se-GCN extract spatial and spectral features by modeling the correlations between spatial pixels and between spectral bands, respectively. Then, by integrating an attention mechanism into the information aggregation of the graph, the GCAFM, comprising three parts (a spatial graph attention block, a spectral graph attention block, and a fusion block), is designed to fuse the spatial and spectral features and to suppress noise interference in Sa-GCN and Se-GCN. Moreover, the idea of an adaptive graph is introduced to search for an optimal graph through backpropagation during training. Experiments on two HSI datasets show that the proposed method achieves better performance than other classification methods.
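The adaptive-graph idea can be made concrete with a minimal sketch: a graph convolution layer whose adjacency matrix is itself a trainable parameter, so that gradients from the classification loss reshape the graph during training. This is an illustrative assumption, not the authors' implementation; the layer sizes, initialization, and normalization below are all hypothetical choices.

```python
# Minimal sketch (not the authors' code) of an adaptive graph convolution:
# the adjacency matrix is a learnable parameter refined by backpropagation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, init_adj: torch.Tensor):
        super().__init__()
        # The graph itself is trainable: gradients flow into the adjacency,
        # so the network explores an optimal graph during training.
        self.adj = nn.Parameter(init_adj.clone())
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) node features; nodes would be pixels
        # for a Sa-GCN-style graph or bands for a Se-GCN-style graph.
        a = F.softmax(F.relu(self.adj), dim=-1)  # non-negative, row-normalized edges
        return F.relu(a @ self.weight(x))        # aggregate neighbors, then transform

# Toy usage: 64 graph nodes with 100-dimensional features.
nodes, feats = 64, 100
layer = AdaptiveGraphConv(feats, 32, init_adj=torch.eye(nodes))
out = layer(torch.randn(nodes, feats))           # shape: (64, 32)
```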
