Abstract

Although deep learning has been successfully applied to remote sensing image classification, it still requires time-consuming and costly annotation. In recent years, domain adaptation has emerged to address this problem, since it does not require any human-interpreted labels in the target domain. However, most existing works focus on the setting with a single source domain and a single target domain. In this paper, we are the first to explore the one-source, multi-target problem for remote sensing applications, and we build a challenging mixed multi-target dataset to contribute to the community. Our method consists of three parts. First, since the composition of the multi-target domain is unknown, we adopt meta learning to divide the mixed multi-target dataset and add a sub-target domain loss as part of the loss function. Second, we apply adversarial learning to prevent a domain classifier from discriminating between source domain images and images from the whole mixed multi-target domain. Finally, the meta learning and adversarial learning steps alternate iteratively, and the domain labels within the mixed multi-target dataset are updated at each iteration. Our method performs well on four common remote sensing datasets (AID, NWPU-RESISC45, UC Merced and WHU-RS19) across five classes (agriculture, forest, river, residential and parking), achieving an average accuracy of 81.59% and outperforming other domain adaptation methods. The experimental results indicate that our method is promising for large-scale, multi-regional and multi-temporal remote sensing applications.
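As a rough illustration of the iterative sub-target division described above, the sketch below assigns mixed multi-target features to sub-target domains and re-updates those domain labels over iterations. This is a minimal stand-in using plain k-means clustering, not the paper's actual meta-learning procedure; the feature space, the cluster count `k`, and the toy data are all assumptions, and the adversarial domain-confusion step is omitted.

```python
import numpy as np

def assign_sub_targets(features, k, iters=10, seed=0):
    """Divide mixed multi-target features into k sub-target domains.

    Simple k-means as a stand-in for the paper's meta-learning split:
    at each iteration the sub-target domain labels are re-estimated,
    mirroring the label updates in the mixed multi-target dataset.
    """
    rng = np.random.default_rng(seed)
    # Initialize cluster centers from randomly chosen feature vectors.
    centers = features[rng.choice(len(features), k, replace=False)]
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each image feature to its nearest sub-target center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Update each center as the mean of its assigned features.
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

# Toy mixed-target features: two well-separated groups standing in for
# images drawn from two unknown target datasets (hypothetical data).
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                   rng.normal(3.0, 0.1, (20, 2))])
labels, _ = assign_sub_targets(feats, k=2)
```

In the full method, these per-iteration sub-target labels would feed the sub-target domain loss, while a separate adversarial objective confuses a source-vs-mixed-target domain classifier.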
