Abstract

Deep learning-based approaches for land cover segmentation rely on supervision with pixel-level ground truth but may not generalize well to unseen image domains. Because the labeling process is tedious and labor-intensive, transferring models trained on label-rich source data to unannotated target data has become a popular research problem in recent years. Owing to domain shift, the difference between the source and target distributions can degrade accuracy on target data if training occurs directly in the source domain without proper domain adaptation (DA). In this letter, we propose a U-Net-based network for DA in the context of semantic segmentation. The model is trained in the source domain with ground truth and tested in the target domain without any annotations. We introduce a layer alignment method and a feature covariance loss function to alleviate the shift between domains. To further enhance the adapted model, we adopt a self-training method to improve segmentation performance. Experimental results on images from the 2018 IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Contest and the International Society for Photogrammetry and Remote Sensing (ISPRS) 2-D Semantic Labeling Contest data set demonstrate the effectiveness of the proposed model. By reducing the difference between the domain distributions, our method outperforms mainstream unsupervised DA methods.
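The abstract does not define the feature covariance loss in detail; a common formulation for aligning second-order feature statistics between domains is a CORAL-style loss, the squared Frobenius distance between the source and target feature covariance matrices. The sketch below illustrates that formulation under this assumption; the function names and the normalization constant are illustrative, not taken from the paper.

```python
import numpy as np

def covariance(feats):
    """Sample covariance of a (n_samples, n_features) feature matrix."""
    centered = feats - feats.mean(axis=0, keepdims=True)
    return centered.T @ centered / (feats.shape[0] - 1)

def feature_covariance_loss(source_feats, target_feats):
    """CORAL-style alignment loss: squared Frobenius distance between the
    source and target covariance matrices, scaled by feature dimension.
    (Assumed formulation; the paper's exact loss may differ.)"""
    d = source_feats.shape[1]
    diff = covariance(source_feats) - covariance(target_feats)
    return float(np.sum(diff ** 2) / (4.0 * d * d))
```

Minimizing such a loss on intermediate activations encourages the network to produce features whose second-order statistics match across domains, which is one way the distribution gap described above can be reduced during training.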
