Abstract

Generalization performance in semantic segmentation remains a major challenge when the data distributions of the source and target domains mismatch. Unsupervised domain adaptation (UDA) approaches have been proposed to mitigate this problem, among which entropy-minimization-based methods have gained increasing attention. However, these methods merely follow the cluster assumption by sharpening the prediction distribution, and thus yield limited performance improvement. Without additional priors, the entropy loss can easily over-sharpen the prediction distribution, which introduces noisy information into the learning process. Moreover, the gradient of the entropy loss is strongly biased toward easy samples, further limiting generalization gains. In this paper, we first propose a pixel-level consistency regularization method, which introduces the smoothness prior into the UDA problem. Building on this consistency regularization, we propose the neutral cross-entropy loss and reveal that its internal neutralization mechanism mitigates the over-sharpening of entropy minimization via the flattening effect of consistency regularization. We also demonstrate that the gradient bias toward easy samples is inherently addressed by the neutral cross-entropy loss. Experiments show that the proposed method outperforms state-of-the-art methods on two synthetic-to-real benchmarks while using only a lightweight network.
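To make the contrast in the abstract concrete, the sketch below illustrates (with plain Python, for a single pixel's class distribution) the two ingredients involved: the per-pixel entropy that entropy-minimization methods drive down, and a pixel-level consistency term between predictions on a clean and a perturbed view. The function names, the choice of a squared-difference consistency penalty, and the toy logits are illustrative assumptions, not the paper's actual formulation.

```python
import math

def softmax(logits):
    """Convert per-class logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(p):
    """Shannon entropy of a per-pixel class distribution.
    Entropy minimization pushes this toward 0 (a sharp, one-hot-like prediction)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def consistency_loss(p, p_perturbed):
    """Hypothetical pixel-level consistency term: mean squared difference
    between predictions on the clean and perturbed views of the same pixel."""
    return sum((a - b) ** 2 for a, b in zip(p, p_perturbed)) / len(p)

# A sharper distribution has lower entropy than a flatter one,
# which is what the entropy loss exploits (and can over-exploit).
sharp = softmax([4.0, 0.0, 0.0])
flat = softmax([1.0, 1.0, 1.0])
```

Here `entropy(flat)` equals log 3 while `entropy(sharp)` is much smaller, so gradient descent on the entropy alone keeps sharpening predictions regardless of correctness; the consistency term instead only penalizes disagreement between views, which is the smoothness prior the paper attaches to the UDA problem.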
