Abstract
Current unsupervised domain adaptation (UDA) methods based on Generative Adversarial Network (GAN) architectures assume that source samples arise from a single distribution. These methods have shown compelling results by learning a transformation between the source and target domains that reduces the distribution divergence. However, this one-to-one assumption renders existing GAN-based UDA methods ineffective in the more realistic scenario where source samples are collected from diverse sources. In this paper, we present a novel GAN-based framework, the Multi-Source Adaptation Network (MSAN), for multi-source domain adaptation (MDA), which mitigates the domain shifts between multiple source domains and the target domain. The proposed framework consists of multiple GAN architectures that learn bidirectional transformations between each source domain and the target domain efficiently and simultaneously. Technically, we introduce a joint feature space that guides multi-level consistency constraints across all the transformations, so as to preserve domain-invariant patterns while endowing the unlabeled target samples with discriminative power during adaptation. Moreover, the proposed model can naturally be used to enlarge the target dataset with synthetic target images (which carry ground-truth labels from the different source domains) and pseudo-labeled target images, thereby allowing a target-specific classifier to be constructed in an unsupervised manner. Experiments demonstrate that our model exceeds state-of-the-art results on MDA tasks across several benchmark datasets.
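To make the described architecture concrete, below is a minimal sketch of how such a framework could be wired together in PyTorch: one bidirectional generator pair and discriminator per source domain, a shared encoder standing in for the joint feature space, a classifier trained on translated (label-preserving) source images and on pseudo-labeled target images, and cycle- plus feature-level consistency terms. All module names, network sizes, loss weights, and the confidence threshold are illustrative assumptions; the abstract does not specify the authors' actual implementation.

```python
# A minimal sketch of the structure described above; everything below
# (module names, network sizes, loss weights, the 0.9 pseudo-label threshold)
# is an illustrative assumption, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy image-to-image translator standing in for a full GAN generator."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy real/fake critic for the target domain."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, x):
        return self.net(x)

class Encoder(nn.Module):
    """Shared encoder playing the role of the joint feature space."""
    def __init__(self, ch=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, x):
        return self.net(x)

class MSANSketch(nn.Module):
    """One bidirectional GAN per source domain plus a shared classifier."""
    def __init__(self, num_sources, num_classes, ch=3, dim=64):
        super().__init__()
        self.s2t = nn.ModuleList(Generator(ch) for _ in range(num_sources))  # source -> target
        self.t2s = nn.ModuleList(Generator(ch) for _ in range(num_sources))  # target -> source
        self.disc = nn.ModuleList(Discriminator(ch) for _ in range(num_sources))
        self.encoder = Encoder(ch, dim)                  # joint feature space
        self.classifier = nn.Linear(dim, num_classes)    # target-specific classifier

    def source_losses(self, i, x_s, y_s):
        """Generator-side losses for the i-th source domain."""
        fake_t = self.s2t[i](x_s)                        # synthetic target image, label y_s preserved
        adv = F.binary_cross_entropy_with_logits(        # adversarial: fool the target discriminator
            self.disc[i](fake_t), torch.ones(x_s.size(0), 1))
        cyc = F.l1_loss(self.t2s[i](fake_t), x_s)        # cycle (pixel-level) consistency
        feat = F.mse_loss(self.encoder(fake_t), self.encoder(x_s))          # feature-level consistency
        cls = F.cross_entropy(self.classifier(self.encoder(fake_t)), y_s)   # keep label after translation
        return adv + 10.0 * cyc + feat + cls

    def pseudo_label_loss(self, x_t, threshold=0.9):
        """Self-training on confidently predicted (pseudo-labeled) target images."""
        logits = self.classifier(self.encoder(x_t))
        conf, pseudo = logits.detach().softmax(dim=1).max(dim=1)
        mask = conf > threshold
        if mask.any():
            return F.cross_entropy(logits[mask], pseudo[mask])
        return logits.sum() * 0.0                        # no confident samples in this batch

# Toy usage: two labeled source batches and one unlabeled target batch.
model = MSANSketch(num_sources=2, num_classes=10)
x_t = torch.randn(4, 3, 32, 32)
loss = model.pseudo_label_loss(x_t) + sum(
    model.source_losses(i, torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,)))
    for i in range(2))
loss.backward()
```

In this sketch the classifier only ever sees target-styled inputs (translated source images and real target images), which is one plausible reading of how the enlarged target dataset yields a target-specific classifier without target labels.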