Abstract

With the development of supervised deep neural networks, classification performance on existing remote sensing scene datasets has improved markedly. However, supervised learning methods rely heavily on large-scale labeled examples to achieve good prediction performance, and the scarcity of large-scale labeled remote sensing scene images has become the primary bottleneck in scene classification. To address this issue, a novel scene classification method using self-supervised gated self-attention generative adversarial networks (GANs) with a similarity loss is proposed. Specifically, a gated self-attention module is first introduced into the GAN to focus on key scene areas and filter out irrelevant information, strengthening the feature representations. Then, a pyramidal convolution block is introduced into the residual block of the discriminator to capture different levels of detail in the image using filters of varying sizes and depths, enhancing the discriminator's feature representations. Additionally, a novel similarity loss term is integrated into the discriminator to leverage self-supervised learning. Spectral normalization is also applied to both the generative and discriminative networks to stabilize training and enhance feature representations, and a multilevel feature fusion architecture is integrated into the discriminative network to obtain more discriminative representations. Experimental results on the AID and NWPU-RESISC45 datasets show that the proposed method achieves the best performance among existing unsupervised classification methods.
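The gated self-attention idea can be illustrated with a minimal sketch. The abstract does not give the exact formulation, so the following assumes a SAGAN-style self-attention map (1x1 projections for queries, keys, and values over flattened spatial positions) combined with a learned sigmoid gate that blends the attention output with the input features; all function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gated_self_attention(x, Wf, Wg, Wh, gate):
    """Sketch of a gated self-attention block (assumed formulation).

    x:  (C, N) feature map flattened over N = H*W spatial positions.
    Wf, Wg, Wh: weights of 1x1 convolutions, shapes (C', C), (C', C), (C, C).
    gate: scalar logit; sigmoid(gate) blends attention output and input.
    """
    f = Wf @ x                       # queries, (C', N)
    g = Wg @ x                       # keys,    (C', N)
    h = Wh @ x                       # values,  (C, N)
    attn = softmax(f.T @ g, axis=0)  # (N, N); column j attends over all positions i
    o = h @ attn                     # attention output, (C, N)
    s = 1.0 / (1.0 + np.exp(-gate))  # sigmoid gate in (0, 1)
    # Gate decides how much attended context vs. original feature to keep.
    return s * o + (1.0 - s) * x
```

A strongly negative gate logit recovers the input features almost unchanged, while a positive one emphasizes the globally attended context; in a trained network the gate would be learned jointly with the projections.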

