Abstract

Remote sensing deals with huge variations in geography, acquisition season, and sensor type. Given the difficulty of collecting labeled data that uniformly represents all scenarios, data-hungry deep learning models are often trained with labeled data from a source domain that is limited in the above-mentioned aspects. Domain adaptation (DA) methods can adapt such a model for application to target domains whose distributions differ from the source domain. However, most remote sensing DA methods are designed for a single target domain, thus requiring a separate classifier to be trained for each target domain. To mitigate this, we propose multitarget DA, in which a single classifier is learned for multiple unlabeled target domains. To build a multitarget classifier, it may be beneficial to effectively aggregate features from the labeled source and the different unlabeled target domains. Toward this, we exploit coteaching based on a graph neural network that is capable of leveraging unlabeled data. We use a sequential adaptation strategy that first adapts to the easier target domains, assuming that the network finds it easier to adapt to the closest target domain. We validate the proposed method on two different datasets, representing geographical and seasonal variation. Code is available at https://gitlab.lrz.de/ai4eo/da-multitarget-gnn/.
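To make the sequential strategy more concrete, the following is a minimal sketch in PyTorch-style Python: unlabeled target domains are ranked by a simple feature-distance proxy to the source and adapted to one after another, easiest first. All names here (featurize_fn, adapt_fn, mean_feature_distance) are hypothetical placeholders, and the GNN-based coteaching step is abstracted behind adapt_fn; this is not the paper's actual implementation.

```python
# Hypothetical sketch of sequential multi-target adaptation (not the paper's code).
import torch


def mean_feature_distance(src_feats: torch.Tensor, tgt_feats: torch.Tensor) -> float:
    """Simple domain-gap proxy: distance between mean feature vectors."""
    return torch.norm(src_feats.mean(dim=0) - tgt_feats.mean(dim=0)).item()


def sequential_multitarget_adaptation(model, source_loader, target_loaders,
                                      adapt_fn, featurize_fn):
    """Adapt `model` to each unlabeled target domain in order of increasing
    estimated distance from the source (easiest target first)."""
    src_feats = featurize_fn(model, source_loader)
    # Rank target domains by their estimated gap to the source.
    gaps = [
        (mean_feature_distance(src_feats, featurize_fn(model, loader)), idx)
        for idx, loader in enumerate(target_loaders)
    ]
    for _, idx in sorted(gaps):
        # adapt_fn stands in for the unsupervised adaptation step
        # (in the paper, GNN-based coteaching on source + target samples).
        model = adapt_fn(model, source_loader, target_loaders[idx])
    return model
```

The mean-feature distance above is only a stand-in for whatever domain-similarity measure is used to decide the adaptation order; the key idea illustrated is that a single classifier is updated incrementally across all target domains rather than training one classifier per target.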

Highlights

  • Most deep learning based methods assume that the training data and test data are drawn from the same distribution

  • Some works in the computer vision literature have addressed this issue by designing methods that adapt to multiple target domains simultaneously from a single source domain [6], a setting called Multi-target Domain Adaptation (MTDA)

  • The contributions of this work are as follows: 1) We propose a graph neural network (GNN)-based method for multi-target domain adaptation that starts by learning a classifier on the source domain and incrementally updates it on the target domains


Introduction

Most deep learning based methods assume that the training data and test data are drawn from the same distribution. This assumption often does not hold in remote sensing, where differences are induced by geographic variation, acquisition season, and sensor. Most domain adaptation methods adapt to a single unlabeled target domain from a single labeled source domain, using generative modeling [2], adversarial training [3], or statistical alignment [4], [5]. Such models are not suitable for practical settings in remote sensing, as we may come across many target domains and a separate model needs to be trained for each target domain. Some works in the computer vision literature have addressed this issue by designing methods that adapt to multiple target domains simultaneously from a single source domain [6], a setting called Multi-target Domain Adaptation (MTDA).
