Abstract

Unsupervised Domain Adaptation (UDA) aims to learn a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. The strategy of aligning the two domains in a latent feature space, via metric discrepancy or adversarial learning, has achieved considerable progress. However, existing approaches mainly adapt the entire image and ignore a bottleneck: forcing the adaptation of uninformative, domain-specific variations undermines the effectiveness of the learned features. To address this problem, we propose a novel component called Informative Feature Disentanglement (IFD), which can be paired with either an adversarial network or a metric discrepancy model. The resulting architectures, named IFDAN and IFDMN, refine informative features before adaptation. The proposed IFD disentangles informative features from the uninformative, domain-specific variations, which are captured by a Variational Autoencoder (VAE) with lateral connections from the encoder to the decoder. We apply the IFD cooperatively, conducting supervised disentanglement on the source domain and unsupervised disentanglement on the target domain, so that informative features are separated from domain-specific details before adaptation. Extensive experiments on three standard domain adaptation benchmarks, i.e., Office31, Office-Home and VisDA-C, demonstrate the effectiveness of the proposed IFDAN and IFDMN models for UDA.
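
As a reading aid, the following PyTorch sketch illustrates the kind of VAE with lateral (encoder-to-decoder) connections that the abstract describes. It is not the authors' implementation: the class name LadderVAE, the layer widths, and the MSE reconstruction term are assumptions chosen for illustration. The intent is that domain-specific detail can be reconstructed through the lateral skips, so the latent code z retains the informative content that is later adapted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LadderVAE(nn.Module):
        """Toy VAE whose decoder receives lateral (skip) connections from the
        encoder; the skips can carry domain-specific detail, leaving the
        latent code z to hold the informative features."""
        def __init__(self, in_dim=2048, hid_dim=512, z_dim=128):
            super().__init__()
            self.enc1 = nn.Linear(in_dim, hid_dim)
            self.enc2 = nn.Linear(hid_dim, hid_dim)
            self.mu = nn.Linear(hid_dim, z_dim)
            self.logvar = nn.Linear(hid_dim, z_dim)
            self.dec2 = nn.Linear(z_dim + hid_dim, hid_dim)   # lateral from enc2
            self.dec1 = nn.Linear(hid_dim + hid_dim, in_dim)  # lateral from enc1

        def forward(self, x):
            h1 = F.relu(self.enc1(x))
            h2 = F.relu(self.enc2(h1))
            mu, logvar = self.mu(h2), self.logvar(h2)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            d2 = F.relu(self.dec2(torch.cat([z, h2], dim=1)))
            x_hat = self.dec1(torch.cat([d2, h1], dim=1))
            return x_hat, mu, logvar, z

    def vae_loss(x, x_hat, mu, logvar):
        """Reconstruction plus KL divergence: the standard VAE objective."""
        rec = F.mse_loss(x_hat, x)
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kld

On the source domain the disentanglement can additionally be supervised by a classification loss on z; on the target domain only the reconstruction and KL terms apply, matching the cooperative supervised/unsupervised scheme the abstract describes.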

Highlights

  • We propose the Informative Feature Disentanglement (IFD) module to select the regions that can be adapted; integrating it into an adversarial network and a metric discrepancy module yields two novel architectures, the Informative Feature Disentanglement Adversarial Network (IFDAN) and the Informative Feature Disentanglement Metric Discrepancy Network (IFDMN), respectively (a minimal sketch of both adaptation branches follows this list).

  • IFDMN reconstructs the target domain to mine its structure, and further uses ladder variational connections to suppress redundant target-domain information.

  • We propose a novel module named Informative Feature Disentanglement (IFD) that adapts using only high-level semantic features while filtering out redundant information.
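
To make the two integration routes concrete, here is a hedged sketch of adaptation heads that could sit on top of the disentangled code z: a domain-adversarial branch with a gradient reversal layer for the IFDAN route, and a simple linear-kernel MMD term for the IFDMN route. GradReverse, AdversarialHead, the layer widths, and the MMD estimator are illustrative assumptions, not the paper's exact components.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Gradient reversal: identity in the forward pass, negated
        (scaled) gradient in the backward pass."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    class AdversarialHead(nn.Module):
        """Label classifier plus domain discriminator over the latent code z."""
        def __init__(self, z_dim=128, n_classes=31):
            super().__init__()
            self.classifier = nn.Linear(z_dim, n_classes)
            self.discriminator = nn.Sequential(
                nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 2))

        def forward(self, z_src, z_tgt, lam=1.0):
            logits = self.classifier(z_src)            # supervised on source only
            z_all = torch.cat([z_src, z_tgt], dim=0)
            dom_logits = self.discriminator(GradReverse.apply(z_all, lam))
            dom_labels = torch.cat([torch.zeros(z_src.size(0)),
                                    torch.ones(z_tgt.size(0))]).long()
            return logits, dom_logits, dom_labels

    def linear_mmd(z_src, z_tgt):
        """Linear-kernel MMD: squared distance between domain means."""
        return (z_src.mean(0) - z_tgt.mean(0)).pow(2).sum()

A training step would combine the source classification loss cross_entropy(logits, y_src) with either cross_entropy(dom_logits, dom_labels) for the adversarial route (IFDAN) or linear_mmd(z_src, z_tgt) for the metric route (IFDMN), on top of the VAE losses sketched after the abstract.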

Introduction

Recent evidence [7] indicates that DNNs depend strongly on the dataset on which they are originally trained, and that the learned features cannot be transferred to a different domain without adjustment [8], [9]. This difficulty is caused by domain shift [10]: predictors trained on a source domain undergo a drastic drop in performance when applied to a target domain. The objective of DA is to leverage labeled data from one or more similar domains (the source domain) to improve learning in the domain of interest (the target domain), whose distribution is different from but related to the source distribution.
