Abstract

Deep learning has achieved excellent performance in remote-sensing image scene classification because large annotated datasets can be used for training. In practical applications, however, remote-sensing imagery typically offers only a few annotated samples alongside a large number of unannotated ones, which leads to overfitting of deep models and degrades scene-classification performance. To address these problems, a semi-supervised representation consistency Siamese network (SS-RCSN) is proposed for remote-sensing image scene classification. First, considering the intraclass diversity and interclass similarity of remote-sensing images, an Involution generative adversarial network (GAN) is used to extract discriminative features from remote-sensing images via unsupervised learning. Then, a Siamese network with a representation-consistency loss is proposed for semi-supervised classification, aiming to reduce the representation differences between labeled and unlabeled data. Experimental results on the UC Merced dataset, the RESISC-45 dataset, the aerial image dataset (AID), and the RS dataset demonstrate that our method yields superior classification performance compared with other semi-supervised learning (SSL) methods.
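The abstract does not give the loss formulas, so the following is only a minimal sketch of the general idea: a weight-sharing (Siamese) classifier trained with supervised cross-entropy on the labeled batch plus a consistency term that pulls the representations of two views of the same unlabeled images together. The encoder, the MSE form of the consistency term, and all names below are illustrative assumptions, not the authors' implementation (which uses Involution-GAN features).

```python
# Hedged sketch of a Siamese semi-supervised setup with a representation-
# consistency term. All module names and the exact loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseClassifier(nn.Module):
    """One shared encoder and classification head used by both branches."""

    def __init__(self, num_classes: int, feat_dim: int = 128):
        super().__init__()
        # Stand-in CNN encoder; the paper uses features from an Involution-GAN.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)          # shared representation
        return z, self.head(z)


def semi_supervised_loss(model, x_lab, y_lab, x_unl_a, x_unl_b, lam=1.0):
    """Cross-entropy on labeled data + representation consistency (assumed MSE)
    between the two Siamese branches on two views of the unlabeled data."""
    _, logits = model(x_lab)
    sup = F.cross_entropy(logits, y_lab)

    z_a, _ = model(x_unl_a)          # branch 1: first augmented view
    z_b, _ = model(x_unl_b)          # branch 2: second view, same weights
    consistency = F.mse_loss(z_a, z_b)

    return sup + lam * consistency


if __name__ == "__main__":
    model = SiameseClassifier(num_classes=21)      # e.g., 21 classes in UC Merced
    x_lab = torch.randn(4, 3, 64, 64)
    y_lab = torch.randint(0, 21, (4,))
    x_unl_a = torch.randn(8, 3, 64, 64)            # simulated pair of augmented
    x_unl_b = torch.randn(8, 3, 64, 64)            # views of one unlabeled batch
    loss = semi_supervised_loss(model, x_lab, y_lab, x_unl_a, x_unl_b)
    loss.backward()
```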
