Abstract

Given trained models from multiple source domains, how can we predict the labels of unlabeled data in a target domain? Unsupervised multi-source domain adaptation (UMDA) aims to predict the labels of unlabeled target data by transferring the knowledge of multiple source domains. UMDA is crucial in many real-world scenarios where no labeled target data are available. Previous UMDA approaches assume that data are observable in all domains. In many practical settings, however, source data are not easily accessible due to privacy or confidentiality concerns, even though classifiers trained on the source domains are readily available. In this work, we target data-free UMDA, where source data are not observable at all: a realistic and important problem that has not been studied before. To solve data-free UMDA, we propose DEMS (Data-free Exploitation of Multiple Sources), a novel architecture that adapts target data to the source domains without exploiting any source data and estimates target labels using the pre-trained source classifiers. Extensive experiments on real-world datasets show that DEMS achieves state-of-the-art accuracy for data-free UMDA, up to 27.5 percentage points higher than that of the best baseline.

Highlights

  • Given trained models from multiple source domains, how can we predict the labels of unlabeled data in a target domain? Unsupervised multi-source domain adaptation (UMDA) aims at predicting the labels of unlabeled target data by utilizing the knowledge of multiple source domains

  • Many previous works [1,2,3,4,5,6,7,8,9] for UMDA have focused on finding domain-invariant features z of data x to transfer the knowledge of conditional probability p(y|z), where y represents the label of data x, from the source domains to the target domain

  • We focus on data-free UMDA (Fig 1b), a more difficult but practical problem of knowledge transfer from multiple source domains to an unlabeled target domain

Summary

Introduction

Given trained models from multiple source domains, how can we predict the labels of unlabeled data in a target domain? Unsupervised multi-source domain adaptation (UMDA) aims to predict the labels of unlabeled target data by utilizing the knowledge of multiple source domains. Many previous works [1,2,3,4,5,6,7,8,9] on UMDA focus on finding domain-invariant features z of data x in order to transfer the knowledge of the conditional probability p(y|z), where y denotes the label of data x, from the source domains to the target domain. We focus on data-free UMDA (Fig 1b), a more difficult but practical problem of knowledge transfer from multiple source domains to an unlabeled target domain. The main challenges are that: 1) we cannot directly estimate the target conditional probability p(y|x) since target labels are not given, and 2) we cannot directly learn a shared manifold z between domains since there is no information about the source data distributions p(x). Since data-free UMDA is a new problem without previous studies, we introduce several
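To make the data-free setting concrete: the only inputs available are the frozen source classifiers and the unlabeled target data. A naive baseline (not the DEMS architecture itself, whose details are not given here) is to ensemble the softmax outputs of the pre-trained source classifiers on the target data. The sketch below assumes each classifier is exposed as a function mapping a batch of target features to class logits; the names `softmax` and `ensemble_predict` are illustrative, not from the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(source_logit_fns, x_target, weights=None):
    """Predict target labels using only frozen source classifiers.

    source_logit_fns: list of callables, each mapping an (n, d) batch of
        target data to (n, c) class logits -- the pre-trained classifiers.
    weights: optional per-source weights; uniform averaging by default.
    Returns an (n,) array of predicted class indices.
    """
    probs = [softmax(f(x_target)) for f in source_logit_fns]
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    avg = sum(w * p for w, p in zip(weights, probs))
    return avg.argmax(axis=-1)
```

This baseline transfers p(y|x) from each source without touching source data, but it ignores the domain shift between target and source distributions, which is exactly the gap that adapting target data toward the source domains is meant to close.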

[Table fragment: source data accessibility by method — Best Single Source: Accessible; related work: Accessible; DEMS (data-free): Inaccessible]
