Abstract

Unsupervised domain adaptation (UDA) makes predictions for target-domain data while manual annotations are available only in the source domain. Previous methods minimize the domain discrepancy while neglecting class information, which may lead to misalignment and poor generalization. To tackle this issue, this paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric, the Contrastive Domain Discrepancy (CDD), that explicitly models both the intra-class and the inter-class domain discrepancy. Optimizing CAN raises two technical issues: 1) the target labels are not available; and 2) conventional mini-batch sampling is imbalanced across classes. We therefore design an alternating update strategy that optimizes both the target label estimates and the feature representations, and we develop class-aware sampling to enable more efficient and effective training. Our framework applies to both the single-source and the multi-source domain adaptation scenarios. In particular, to deal with multiple source domains, we propose: 1) a multi-source clustering ensemble, which exploits the complementary knowledge of distinct source domains to make more accurate and robust target label estimates; and 2) boundary-sensitive alignment, which fits the decision boundary better to the target. Experiments are conducted on three real-world benchmarks: Office-31 and VisDA-2017 for the single-source scenario, and DomainNet for the multi-source scenario. The results demonstrate that CAN performs favorably against state-of-the-art methods, and ablation studies verify the effectiveness of each key component of the proposed system.
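As the abstract notes, CAN is trained with an alternating update: first estimate target labels with the network fixed, then update the features with the label estimates fixed, using class-aware sampling so each mini-batch draws the same classes from both domains. The sketch below illustrates this loop under our reading of the paper; cluster_target, class_aware_batches, and cdd_loss are hypothetical helper names, not the authors' implementation.

    import torch

    def adapt(feature_extractor, classifier, source_loader, target_images,
              num_classes, epochs, cdd_loss):
        ce = torch.nn.CrossEntropyLoss()
        params = list(feature_extractor.parameters()) + list(classifier.parameters())
        opt = torch.optim.SGD(params, lr=1e-3, momentum=0.9)
        for _ in range(epochs):
            # Step 1: with the network fixed, estimate target labels by
            # clustering target features (cluster_target is a hypothetical
            # stand-in for the paper's clustering-based label estimation).
            with torch.no_grad():
                target_feats = feature_extractor(target_images)
            pseudo_labels = cluster_target(target_feats, num_classes)

            # Step 2: with the label estimates fixed, update the network on
            # class-aware mini-batches (hypothetical helper) that draw the
            # same classes from both domains, so every per-class discrepancy
            # term is estimated from non-empty sample sets.
            for xs, ys, xt, yt_hat in class_aware_batches(
                    source_loader, target_images, pseudo_labels):
                fs, ft = feature_extractor(xs), feature_extractor(xt)
                loss = ce(classifier(fs), ys) + cdd_loss(fs, ys, ft, yt_hat)
                opt.zero_grad()
                loss.backward()
                opt.step()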

Highlights

  • Recent advancements in deep neural networks have successfully improved a variety of learning problems [1], [2], [3]

  • Among the recent work on Unsupervised Domain Adaptation (UDA), a seminal line of work proposed by Long et al. [13], [14] aims at minimizing the discrepancy between the source and target domains in deep neural networks, where the domain discrepancy is measured by Maximum Mean Discrepancy (MMD) [13] and Joint MMD (JMMD) [14]

  • We introduce a new discrepancy metric, the Contrastive Domain Discrepancy (CDD), which can be embedded in the proposed Contrastive Adaptation Network (CAN) to perform class-aware alignment during end-to-end training
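Concretely, the CDD of the last highlight can be read as class-conditional MMD: intra-class terms (same class across domains) are minimized while inter-class terms (different classes across domains) are maximized. Below is a minimal sketch of one plausible Gaussian-kernel estimator; it follows this reading rather than the authors' exact formulation, and all names are illustrative.

    import torch

    def rbf_kernel(a, b, sigma=1.0):
        # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))

    def class_mmd(fs_c, ft_c2, sigma=1.0):
        # Squared MMD between source features of one class and target
        # features of another (possibly the same) class.
        return (rbf_kernel(fs_c, fs_c, sigma).mean()
                + rbf_kernel(ft_c2, ft_c2, sigma).mean()
                - 2 * rbf_kernel(fs_c, ft_c2, sigma).mean())

    def cdd(fs, ys, ft, yt_hat, num_classes, sigma=1.0):
        # fs/ft: source/target features; ys: source labels;
        # yt_hat: estimated target labels from clustering.
        intra, inter, n_intra, n_inter = 0.0, 0.0, 0, 0
        for c in range(num_classes):
            for c2 in range(num_classes):
                s, t = fs[ys == c], ft[yt_hat == c2]
                if len(s) == 0 or len(t) == 0:
                    continue
                d = class_mmd(s, t, sigma)
                if c == c2:
                    intra, n_intra = intra + d, n_intra + 1
                else:
                    inter, n_inter = inter + d, n_inter + 1
        # Minimize intra-class discrepancy, maximize inter-class discrepancy.
        return intra / max(n_intra, 1) - inter / max(n_inter, 1)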

Summary

INTRODUCTION

Recent advancements in deep neural networks have successfully improved a variety of learning problems [1], [2], [3]. In the absence of labeled data from the target domain, Unsupervised Domain Adaptation (UDA) methods have emerged to mitigate the domain shift in data distributions [5], [6], [7], [8], [9], [10], [11], [12]. UDA relates to unsupervised learning in that it requires manual labels only from the source domain and none from the target. MMD and JMMD have proven effective in many computer vision problems and have demonstrated state-of-the-art results on several UDA benchmarks [13], [14].
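For reference, the squared population MMD that these methods estimate and minimize has the standard kernel form below (a textbook identity from the MMD literature, with kernel k and RKHS feature map \phi; it is not a formula quoted from this paper):

    \mathrm{MMD}^2(P, Q)
      = \bigl\| \mathbb{E}_{x \sim P}[\phi(x)] - \mathbb{E}_{y \sim Q}[\phi(y)] \bigr\|_{\mathcal{H}}^2
      = \mathbb{E}_{x, x' \sim P}[k(x, x')]
        - 2\, \mathbb{E}_{x \sim P,\, y \sim Q}[k(x, y)]
        + \mathbb{E}_{y, y' \sim Q}[k(y, y')].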

Proposed Method
RELATED WORK
Revisiting Maximum Mean Discrepancy
METHODOLOGY
Contrastive Domain Discrepancy
Contrastive Adaptation Network
Optimizing CAN
Initialize Otc
Multi-Source Contrastive Adaptation Network
Setups
Comparison with the state-of-the-art
Method
Findings
Ablation studies
