Abstract

Existing deep networks have shown excellent performance in remote sensing scene classification, but they generally require a large amount of class-balanced training samples. When training samples are imbalanced, deep networks tend to underfit the minority classes because they easily bias toward the majority classes. To address these problems, a multi-granularity decoupling network (MGDNet) is proposed for remote sensing image scene classification. First, we design a multi-granularity complementary feature representation (MGCFR) method to extract fine-grained features from remote sensing images, which uses region-level supervision to guide the attention of the decoupling network. Second, a class-imbalanced pseudo-label selection (CIPS) approach is proposed to evaluate the credibility of unlabeled samples. Finally, a diversity component feature (DCF) loss function is developed to force the local features to be more discriminative. Our model performs satisfactorily on three public datasets: UC Merced (UCM), NWPU-RESISC45, and the Aerial Image Dataset (AID). Experimental results show that the proposed model yields superior performance compared with other state-of-the-art methods.
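The abstract does not give the formulation of the DCF loss, so the following is only a minimal PyTorch sketch of one common way such a diversity objective can be realized: penalizing pairwise cosine similarity between the local (component) features of an image so that they attend to complementary regions. The function name, tensor shapes, and the similarity-based penalty are assumptions for illustration, not the authors' actual definition.

```python
import torch
import torch.nn.functional as F

def diversity_component_loss(component_feats: torch.Tensor) -> torch.Tensor:
    """Illustrative diversity loss over K component (local) features per image.

    component_feats: tensor of shape (B, K, D) holding K local feature
    vectors of dimension D for each of B images. The loss penalizes the
    average pairwise cosine similarity between components, pushing them
    toward complementary, more discriminative regions.
    """
    # L2-normalize each component so dot products become cosine similarities.
    feats = F.normalize(component_feats, dim=-1)            # (B, K, D)
    sim = torch.bmm(feats, feats.transpose(1, 2))           # (B, K, K)
    k = sim.size(1)
    # Remove self-similarity on the diagonal, then average the off-diagonal terms.
    off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
    return off_diag.abs().sum(dim=(1, 2)).mean() / (k * (k - 1))
```

In a training loop, a term like this would typically be added to the classification loss with a small weighting coefficient, so that diversity among local features is encouraged without dominating the supervised objective.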
