Abstract

Unsupervised multidomain adaptation has attracted increasing attention because it delivers richer information for a target task on an unlabeled target domain by leveraging knowledge from multiple labeled source domains. However, it is the quality of training samples, not just their quantity, that influences transfer performance. In this article, we propose a multidomain adaptation method with sample and source distillation (SSD), which develops a two-step selective strategy to distill source samples and rank the importance of source domains. To distill samples, a pseudo-labeled target domain is constructed to learn a series of category classifiers that separate transferable source samples from inefficient ones. To rank domains, the degree to which each source domain accepts a target sample as an insider is estimated by a domain discriminator built on the selected transferable source samples. Using the selected samples and ranked domains, transfer from the source domains to the target domain is achieved by adapting multilevel distributions in a latent feature space. Furthermore, to exploit additional target information that is expected to enhance the cross-domain performance of the source predictors, an enhancement mechanism is built by matching selected pseudo-labeled and unlabeled target samples. The degrees of acceptance learned by the domain discriminator finally serve as source merging weights to predict the target task. The superiority of the proposed SSD is validated on real-world visual classification tasks.
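As a rough illustration of the final merging step described in the abstract, the sketch below combines per-source class predictions for target samples using normalized acceptance degrees as source merging weights. This is a minimal sketch under stated assumptions, not the paper's implementation: the function name `merge_source_predictions` and the inputs `probs_per_source` and `acceptance` are hypothetical, and the acceptance degrees are assumed to already have been produced by a domain discriminator.

```python
import numpy as np

def merge_source_predictions(probs_per_source, acceptance):
    """Merge per-source predictions with acceptance-degree weights.

    probs_per_source: list of (n_samples, n_classes) arrays, one array of
        class probabilities per source-domain predictor.
    acceptance: (n_sources,) array of hypothetical domain-discriminator
        acceptance degrees; a higher value means the target domain is
        judged closer to that source domain.
    Returns predicted target labels of shape (n_samples,).
    """
    weights = np.asarray(acceptance, dtype=float)
    weights = weights / weights.sum()           # normalize to a convex combination
    stacked = np.stack(probs_per_source)        # (n_sources, n_samples, n_classes)
    merged = np.tensordot(weights, stacked, axes=1)  # weighted average over sources
    return merged.argmax(axis=1)                # most probable class per sample

# Toy usage: two source predictors disagree; the better-accepted source wins.
src_a = np.array([[0.9, 0.1], [0.2, 0.8]])
src_b = np.array([[0.3, 0.7], [0.6, 0.4]])
print(merge_source_predictions([src_a, src_b], acceptance=[0.8, 0.2]))  # -> [0 1]
```

Normalizing the weights keeps the merged output a valid probability distribution, so sources the discriminator deems closer to the target dominate the final prediction.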
