Abstract

Active domain adaptation aims to improve model adaptation by annotating a small number of informative unlabeled target samples. Traditional active learning strategies for semantic segmentation often neglect domain shift, yielding suboptimal results in domain adaptation scenarios. In this paper, we present a novel active domain adaptation approach for semantic segmentation that maximizes segmentation performance under domain shift with a limited budget of queried target labels. To select the most valuable samples for labeling, we introduce a new acquisition strategy: it leverages a target domainness map to identify the samples most informative for reducing the domain gap, and employs region-aware prediction uncertainty to explore ambiguous samples. To maximize the utility of the acquisition strategy over time, we dynamically adjust the balance between prediction uncertainty and target domainness across selection rounds. To further bolster adaptation performance, a smooth loss is applied to the target data, encouraging locally consistent predictions. Extensive experiments on two benchmarks, GTAV → Cityscapes and SYNTHIA → Cityscapes, demonstrate that our method surpasses existing active domain adaptation methods for semantic segmentation. Moreover, it achieves results comparable to fully supervised performance with only 5% of target-domain annotations, validating the effectiveness of our method.
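To make the selection rule concrete, the following is a minimal sketch of one plausible acquisition score that combines region-level prediction uncertainty with target domainness and shifts their balance over selection rounds. The function name `acquisition_scores`, the linear decay schedule for the mixing weight, and the placeholder score arrays are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def acquisition_scores(uncertainty, domainness, round_idx, num_rounds):
    """Score each candidate region as a convex combination of prediction
    uncertainty and target domainness. The weight on domainness decays
    linearly over selection rounds (assumed schedule): early rounds favor
    samples that reduce the domain gap, later rounds favor ambiguous ones."""
    alpha = 1.0 - round_idx / max(num_rounds - 1, 1)
    return alpha * domainness + (1.0 - alpha) * uncertainty

# Usage example: select the top-50 regions to annotate in round 0 of 5.
rng = np.random.default_rng(0)
uncertainty = rng.random(1000)  # placeholder region-wise uncertainty values
domainness = rng.random(1000)   # placeholder target-domainness values
scores = acquisition_scores(uncertainty, domainness, round_idx=0, num_rounds=5)
query_idx = np.argsort(scores)[-50:]  # indices of regions to send for labeling
```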
