Abstract

Remote sensing (RS) image semantic segmentation using deep convolutional neural networks (DCNNs) has achieved great success in various applications. However, the heavy dependence on annotated data makes it challenging for DCNNs to adapt to different RS scenes. To address this challenge, we propose a cross-domain RS image semantic segmentation task in which ground sampling distance, remote sensing sensor variation, and differences in geographical landscape are considered the main factors causing domain shifts between source and target images. To mitigate the negative impact of domain shift, we propose a self-training guided disentangled adaptation network (ST-DASegNet), which consists of source and target student backbones that extract source-style and target-style features. To align the cross-domain single-style features, we adopt feature-level adversarial learning. We further propose a domain disentangled module (DDM) to extract universal and distinct features from the single-domain cross-style features. Finally, these features are fused and passed to the source and target student decoders to generate predictions. Moreover, we employ an exponential moving average (EMA) based cross-domain separated self-training mechanism to mitigate the instability and adverse effects of adversarial optimization. Experiments on several prominent RS datasets (Potsdam, Vaihingen, and LoveDA) demonstrate that ST-DASegNet outperforms previous methods and achieves new state-of-the-art results. Visualization and analysis further confirm the interpretability of ST-DASegNet. The code is publicly available at https://github.com/cv516Buaa/ST-DASegNet.
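To make the EMA-based self-training step concrete, the following is a minimal sketch of how an EMA teacher is typically maintained from a student network and used to produce confidence-filtered pseudo-labels on unlabeled target images. It assumes a PyTorch implementation; the function names, decay rate, and confidence threshold are illustrative assumptions, not taken from the released ST-DASegNet code.

```python
import copy
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.99) -> None:
    """Update each teacher parameter as an exponential moving average
    of the corresponding student parameter."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

@torch.no_grad()
def pseudo_labels(teacher: torch.nn.Module, target_images: torch.Tensor,
                  threshold: float = 0.9):
    """Generate per-pixel pseudo-labels on unlabeled target images,
    keeping only predictions above a confidence threshold."""
    probs = torch.softmax(teacher(target_images), dim=1)  # (N, C, H, W)
    conf, labels = probs.max(dim=1)                       # (N, H, W)
    mask = conf >= threshold                              # drop low-confidence pixels
    return labels, mask

# Hypothetical usage: initialize the teacher once as a frozen copy of the
# student, then after each student optimization step:
#   teacher = copy.deepcopy(student)          # once, before training
#   ema_update(teacher, student, decay=0.99)  # after every student update
#   labels, mask = pseudo_labels(teacher, target_batch)
```

Because the teacher evolves as a smoothed average of the student, its pseudo-labels fluctuate less across iterations, which is the usual rationale for pairing EMA self-training with otherwise unstable adversarial optimization.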
