In recent years, convolutional neural network (CNN)-based methods have achieved remarkable performance on hyperspectral image (HSI) classification tasks, owing to their hierarchical structure and strong nonlinear fitting capacity. Most of them, however, are supervised approaches that require a large amount of labeled data for training. Moreover, conventional convolution kernels have fixed rectangular shapes and fixed sizes, so they capture short-range relations between pixels well but ignore the long-range context within HSIs, which limits their performance. To overcome these limitations, we present a dynamic multiscale graph convolutional network (GCN) classifier (DMSGer). DMSGer first constructs a relatively small region-level graph based on a superpixel segmentation algorithm and metric learning; superpixels are generally of irregular shapes and sizes and group only similar pixels within a neighborhood. A dynamic pixel-level feature update strategy is then applied to the region-level adjacency matrix, which helps DMSGer learn pixel representations dynamically. Finally, to capture the complex contents of HSIs more thoroughly, the model is expanded into a multiscale version. On the one hand, by introducing graph learning, DMSGer accomplishes HSI classification in a semi-supervised manner, relieving the pressure of collecting abundant labeled samples. On the other hand, based on the proposed dynamic GCN, pixel-level and region-level information can be captured simultaneously within one graph convolution layer, improving the classification results. In addition, the multiscale expansion allows more helpful information to be captured from HSIs. Extensive experiments were conducted on four public HSI datasets, and the promising results illustrate that DMSGer is robust in classifying HSIs. Our source code is available at https://github.com/TangXu-Group/DMSGer.
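To make the described pipeline more concrete, the sketch below illustrates one possible reading of a single "dynamic" graph-convolution step that mixes region-level and pixel-level information: pixels are pooled into superpixel regions, a region adjacency matrix is built from a learned metric, and the region messages are broadcast back to pixels. This is a minimal illustration, not the authors' released implementation (see the GitHub repository above); the class name DynamicRegionGCNLayer, the `assignment` matrix input, and the specific metric and normalization choices are assumptions made for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicRegionGCNLayer(nn.Module):
    """Hypothetical layer: region graph from a learned metric + pixel-level update."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.metric = nn.Linear(in_dim, in_dim, bias=False)   # learned similarity metric
        self.weight = nn.Linear(in_dim, out_dim, bias=False)  # graph-convolution weights

    def forward(self, pixel_feats, assignment):
        # pixel_feats: (n_pixels, in_dim) spectral features of every pixel
        # assignment:  (n_pixels, n_regions) hard superpixel membership (0/1)
        # 1) Aggregate pixels into region-level features (mean over each superpixel).
        region_size = assignment.sum(dim=0).clamp(min=1.0)                # (n_regions,)
        region_feats = assignment.t() @ pixel_feats / region_size[:, None]
        # 2) Build a region-level adjacency matrix from a learned metric,
        #    using a row-wise softmax as the normalization.
        proj = self.metric(region_feats)
        adjacency = F.softmax(proj @ proj.t(), dim=-1)                    # (n_regions, n_regions)
        # 3) Region-level graph convolution.
        region_out = F.relu(self.weight(adjacency @ region_feats))
        # 4) Pixel-level update: broadcast region messages back to each pixel,
        #    so region- and pixel-level information are combined in one layer.
        pixel_out = assignment @ region_out + self.weight(pixel_feats)
        return pixel_out

Under this reading, a multiscale variant would simply apply several such layers with superpixel maps of different granularity (different numbers of segments) and fuse their pixel-level outputs before classification; the exact fusion used by DMSGer is described in the paper itself.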