Abstract
Unsupervised domain adaptation (UDA) avoids expensive data annotation for unlabeled target domains by fully exploiting the knowledge of an existing source domain. In practice, the target data are usually highly heterogeneous, mixing multiple latent domains, and the source data sometimes contain private user information that cannot be accessed directly. To this end, this paper tackles blending-target data under the source-available setting and, for the first time, under the source-free setting. Specifically, we devise a novel Cross-domain Knowledge Collaboration (CdKC) framework that mainly consists of a prediction network and an adaptation network. The complementarity of the two networks is exploited to explore the intrinsic structure of the target domains. CdKC is capable of learning a domain-invariant space while simultaneously disentangling domain-specific features, which greatly boosts UDA performance. A total of 12 tasks are conducted on three visual datasets to verify the superior performance of CdKC against state-of-the-art models designed under four different UDA settings. The experiments show that CdKC still exceeds the accuracy of D-CGCT by 0.4% on the Office dataset and by 1.2% on the Office-Home dataset, even though D-CGCT can access the source data and the domain labels of the targets, which CdKC cannot. This verifies the effectiveness of CdKC despite much looser restrictions on the source and target domains.
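To make the two-network idea concrete, the following is a purely illustrative PyTorch sketch of how a prediction network and an adaptation network might operate on the same target features and be coupled by a collaboration term. All module names, dimensions, and the agreement loss below are assumptions for illustration only, not the CdKC implementation described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical backbone producing features for unlabeled target samples.
class FeatureExtractor(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

# Hypothetical prediction network: a classifier head on target features.
class PredictionNetwork(nn.Module):
    def __init__(self, feat_dim=256, num_classes=31):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):
        return self.head(feats)

# Hypothetical adaptation network: splits features into a domain-invariant
# part and a domain-specific part (a simple linear disentanglement placeholder).
class AdaptationNetwork(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.invariant = nn.Linear(feat_dim, feat_dim)
        self.specific = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats):
        return self.invariant(feats), self.specific(feats)

backbone = FeatureExtractor()
pred_net = PredictionNetwork()
adapt_net = AdaptationNetwork()

x = torch.randn(8, 2048)            # a toy batch of unlabeled target features
feats = backbone(x)
logits = pred_net(feats)            # class predictions from the prediction network
z_inv, z_spec = adapt_net(feats)    # invariant / specific parts from the adaptation network

# A collaboration objective would couple the two branches, e.g. by encouraging
# predictions computed on the invariant part to agree with the direct predictions;
# the actual loss used by CdKC is paper-specific and not reproduced here.
agreement = F.mse_loss(pred_net(z_inv).softmax(-1), logits.softmax(-1))
print(agreement.item())
```

In such a setup, the prediction branch supplies pseudo-supervision while the adaptation branch isolates domain-specific variation, which is one plausible way to exploit the complementarity the abstract refers to.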