Abstract

The accuracy of remote sensing image segmentation and classification is known to decrease dramatically when the source and target images come from different domains; deep learning-based models have boosted performance, but they are effective only when trained with a large number of labeled source images that are similar to the target images. In this article, we propose a generative adversarial network (GAN)-based domain adaptation method for land cover classification of new target remote sensing images that differ substantially from the labeled source images. In the proposed GANs, the source and target images are fully aligned in the image space, feature space, and output space in two stages via adversarial learning. The source images are translated to the style of the target images and then used to train a fully convolutional network (FCN) for semantic segmentation, which classifies the land cover types of the target images. The domain adaptation and segmentation are integrated into an end-to-end framework. Experiments on a multisource data set covering more than 3500 km² with 51 560 high-resolution satellite image tiles of 256×256 pixels in Wuhan city, and on a cross-city data set with 11 383 aerial image tiles of 256×256 pixels in Potsdam and Vaihingen, showed that our method exceeded recent GAN-based domain adaptation methods by at least 6.1% in the mean intersection over union (mIoU) and 4.9% in the overall accuracy (OA). We also showed that our GAN is a generic framework that can be combined with other domain transfer methods to boost their performance.
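
To make the output-space alignment idea concrete, the following minimal PyTorch sketch trains a toy fully convolutional segmentation network on labeled source tiles while a discriminator on the softmax output maps nudges the target predictions toward the source distribution via an adversarial loss. The network sizes, class count, optimizers, and the loss weight LAMBDA_ADV are placeholder assumptions for illustration only, not the architecture or hyperparameters of the paper.

```python
# Minimal sketch of output-space adversarial domain adaptation for segmentation.
# All names and sizes below (TinyFCN, OutputDiscriminator, NUM_CLASSES, LAMBDA_ADV)
# are illustrative assumptions, not the paper's actual models or settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 6  # assumed number of land cover classes

class TinyFCN(nn.Module):
    """Toy fully convolutional segmentation network (stand-in for the real FCN)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        logits = self.classifier(self.backbone(x))
        # Upsample the class scores back to the input resolution.
        return F.interpolate(logits, size=x.shape[2:], mode="bilinear", align_corners=False)

class OutputDiscriminator(nn.Module):
    """Discriminator on the segmentation (output) space: source vs. target."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, p):
        return self.net(p)

seg = TinyFCN()
disc = OutputDiscriminator()
opt_seg = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
LAMBDA_ADV = 0.01  # assumed weight of the adversarial term

def train_step(src_img, src_label, tgt_img):
    # 1) Supervised segmentation loss on labeled source tiles.
    src_logits = seg(src_img)
    loss_seg = F.cross_entropy(src_logits, src_label)

    # 2) Adversarial loss: make target output maps look like source output maps.
    tgt_prob = F.softmax(seg(tgt_img), dim=1)
    d_tgt = disc(tgt_prob)
    loss_adv = bce(d_tgt, torch.zeros_like(d_tgt))  # "0" = source domain label

    opt_seg.zero_grad()
    (loss_seg + LAMBDA_ADV * loss_adv).backward()
    opt_seg.step()

    # 3) Train the discriminator to separate source (0) from target (1) outputs.
    with torch.no_grad():
        src_prob = F.softmax(seg(src_img), dim=1)
    d_src = disc(src_prob)
    d_tgt = disc(tgt_prob.detach())
    loss_d = bce(d_src, torch.zeros_like(d_src)) + bce(d_tgt, torch.ones_like(d_tgt))

    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()
    return loss_seg.item(), loss_adv.item(), loss_d.item()

# Example with random tensors standing in for 256×256 image tiles.
src = torch.randn(2, 3, 256, 256)
lbl = torch.randint(0, NUM_CLASSES, (2, 256, 256))
tgt = torch.randn(2, 3, 256, 256)
print(train_step(src, lbl, tgt))
```

In the full framework described in the abstract, analogous adversarial alignment would also be applied in the image space (translating source images to the target style) and in the feature space, with the two stages integrated end to end with the segmentation network.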
