Abstract
Domain adaptation is one of the most prominent strategies for handling both the scarcity of pixel-level ground truth and the domain shift that is widely encountered in large-scale land use/land cover mapping. Studies focusing on adversarial domain adaptation via re-styling source domain samples, commonly through generative adversarial networks (GANs), have reported varying levels of success, yet they suffer from semantic inconsistencies and visual corruptions, and often require a large number of target domain samples. In this letter, we propose a new lightweight unsupervised domain adaptation (UDA) method for the semantic segmentation of very high-resolution remote sensing images, based on an image-to-image translation (I2IT) approach: an encoder–decoder strategy mixes latent content representations across domains, while a perceptual network module and loss function enforce visual-semantic consistency. Cross-domain comparative experiments show that our method: 1) produces semantically consistent images; 2) can operate with a single target domain sample (i.e., one-shot); and 3) requires only a fraction of the parameters of state-of-the-art methods while still outperforming them. Code is available at github.com/Sarmadfismael/RSOS_I2I.
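To make the translation idea concrete, the following is a minimal sketch (not the authors' released code; see the linked repository for the actual method) of an encoder–decoder that mixes latent content codes across domains and scores the result with a perceptual loss. The module names (ContentEncoder, Decoder, PerceptualLoss), the network depths, the 0.5/0.5 mixing weights, and the VGG16-based feature comparison are all illustrative assumptions, written against PyTorch/torchvision.

```python
# Hypothetical sketch of cross-domain latent content mixing with a
# perceptual consistency loss; architecture and weights are assumptions,
# not the paper's published configuration.
import torch
import torch.nn as nn
import torchvision.models as models


class ContentEncoder(nn.Module):
    """Maps an RGB patch to a latent content representation."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Reconstructs an image from a latent content representation."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


class PerceptualLoss(nn.Module):
    """Compares deep VGG16 features to encourage visual-semantic consistency."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg

    def forward(self, a, b):
        return nn.functional.l1_loss(self.vgg(a), self.vgg(b))


# Toy usage: translate a source-domain patch toward the target style by
# decoding a mix of the two domains' latent content codes, while the
# perceptual loss anchors the translation to the source semantics.
enc, dec, perc = ContentEncoder(), Decoder(), PerceptualLoss()
x_src = torch.rand(1, 3, 128, 128)           # source-domain patch
x_tgt = torch.rand(1, 3, 128, 128)           # single target-domain patch (one-shot)
z_mix = 0.5 * enc(x_src) + 0.5 * enc(x_tgt)  # cross-domain latent mixing
x_trans = dec(z_mix)                          # re-styled source image
loss = perc(x_trans, x_src)                   # semantic-consistency term
```

In a full training loop this perceptual term would be combined with the translation objectives, and the re-styled images would then be used to train the segmentation network; those details are beyond the scope of this sketch.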