Abstract
With the rapid development of deep learning, semantic segmentation methods have been widely applied to remote sensing data. A pretrained semantic segmentation model usually performs poorly when the test images (target domain) differ markedly from the training data set (source domain), yet acquiring a sufficiently large labeled data set for every scenario is rarely feasible. Unsupervised domain adaptation (DA) techniques aim to transfer knowledge learned from the source domain to a completely unlabeled target domain. By reducing the domain shift, DA methods have been shown to improve classification accuracy on the target domain. Hence, in this letter, we propose an unsupervised adversarial DA network that converts deep features into 2-D feature curves and reduces the discrepancy between source-domain and target-domain curves using a conditional generative adversarial network (cGAN). The proposed DA network improves semantic labeling accuracy when a pretrained semantic segmentation model is applied to the target domain. To evaluate the proposed method, experiments are conducted on the International Society for Photogrammetry and Remote Sensing (ISPRS) 2-D Semantic Labeling data set. Results show that the proposed network stably improves overall accuracy, not only when the source and target domains come from the same city but differ in building style, but also when they come from different cities and were acquired by different sensors. Comparisons with several state-of-the-art DA methods demonstrate that the proposed method achieves the best cross-domain semantic segmentation performance.
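To make the adversarial adaptation idea above concrete, the following is a minimal PyTorch-style sketch of feature-level adversarial DA: deep feature maps are flattened into 1-D "curves" and a discriminator learns to tell source curves from target curves, while the segmentation encoder learns to fool it. All names here (CurveDiscriminator, features_to_curves, da_step, encoder) are hypothetical, not the authors' implementation, and the cGAN conditioning described in the abstract is omitted for brevity.

```python
# Illustrative sketch only; assumes an `encoder` producing (B, C, H, W)
# feature maps. The cGAN conditioning from the paper is not shown.
import torch
import torch.nn as nn

class CurveDiscriminator(nn.Module):
    """Distinguishes feature curves drawn from source vs. target images."""
    def __init__(self, curve_len: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(curve_len, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),  # logit: source (1) vs. target (0)
        )

    def forward(self, curve):
        return self.net(curve)

def features_to_curves(feat):
    """Collapse a (B, C, H, W) feature map into per-channel 1-D curves
    of shape (B*C, H*W) -- one simple way to obtain "feature curves"
    (spatial index vs. activation value)."""
    b, c, h, w = feat.shape
    return feat.reshape(b * c, h * w)

def da_step(encoder, disc, x_src, x_tgt, opt_enc, opt_disc):
    """One adversarial adaptation step (sketch)."""
    bce = nn.BCEWithLogitsLoss()

    # 1) Train discriminator: source curves -> 1, target curves -> 0.
    with torch.no_grad():
        c_src = features_to_curves(encoder(x_src))
        c_tgt = features_to_curves(encoder(x_tgt))
    d_loss = bce(disc(c_src), torch.ones(c_src.size(0), 1)) + \
             bce(disc(c_tgt), torch.zeros(c_tgt.size(0), 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Train encoder so target curves look like source curves.
    c_tgt = features_to_curves(encoder(x_tgt))
    g_loss = bce(disc(c_tgt), torch.ones(c_tgt.size(0), 1))
    opt_enc.zero_grad(); g_loss.backward(); opt_enc.step()
    return d_loss.item(), g_loss.item()
```

In this sketch, aligning the curve distributions stands in for the paper's discrepancy reduction; the segmentation head is trained with source labels as usual and is unaffected by the discriminator update.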