Abstract

Semantic segmentation using deep neural networks is an important component of aerial image understanding. However, a model trained on data from one domain may not generalize well to another because of the shift between the two data distributions. Such domain gaps are common in aerial imagery, where visual appearance varies widely, so a substantial loss of accuracy may occur when a trained model is used for inference on new data. In this paper, we propose a novel unsupervised domain adaptation framework for semantic segmentation of aerial images. Specifically, we address domain shift by learning class-aware distribution differences between the source and target domains. Further, we employ entropy minimization on the target domain to encourage high-confidence predictions. We demonstrate the effectiveness of the proposed approach on the ISPRS semantic segmentation challenge dataset and show improvement over state-of-the-art methods.
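
The abstract mentions entropy minimization on the unlabeled target domain; the sketch below illustrates one common way such a term is computed for dense prediction, assuming a PyTorch setup. The function name and tensor shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean per-pixel Shannon entropy of the predicted class distribution.

    Assumed shapes (hypothetical): logits is (B, C, H, W), the raw segmentation
    scores for a batch of unlabeled target-domain images. Minimizing this value
    pushes the model toward confident (low-entropy) predictions on the target domain.
    """
    probs = F.softmax(logits, dim=1)                         # per-pixel class probabilities
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)   # (B, H, W) pixel-wise entropy
    return entropy.mean()
```

In a typical unsupervised adaptation loop, this term would be added (with a weighting coefficient) to the supervised segmentation loss computed on labeled source-domain images.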
