Abstract

The majority of existing works explore Unsupervised Domain Adaptation (UDA) under the ideal assumption that samples in both domains are available and complete. In real-world applications, however, this assumption does not always hold. For instance, as data privacy becomes a growing concern, source-domain samples may not be publicly available for training, leading to the typical Source-Free Domain Adaptation (SFDA) problem. Traditional UDA methods fail on SFDA because of two obstacles: the data incompleteness issue and the domain gap issue. In this paper, we propose a visual SFDA method named Adversarial Style Matching (ASM) to address both issues. Specifically, we first train a style generator to produce source-style samples from the target images, which solves the data incompleteness issue. We use the auxiliary information stored in the pre-trained source model to ensure that the generated samples are statistically aligned with the source samples, and use pseudo labels to maintain semantic consistency. We then feed the target-domain samples and the corresponding source-style samples into a feature generator network to reduce the domain gap with a self-supervised loss. An adversarial scheme is employed to further expand the distributional coverage of the generated source-style samples. The experimental results verify that our method achieves performance comparable to traditional UDA methods that use source samples for training.
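The abstract mentions aligning generated source-style samples with statistics stored in the pre-trained source model. As a minimal illustrative sketch (not the paper's actual loss; the function name and the use of per-channel mean/variance, as in BatchNorm running statistics, are assumptions), such a statistical-alignment objective could look like:

```python
import numpy as np

def style_matching_loss(gen_features, src_mean, src_var):
    """Penalize mismatch between the batch statistics of generated
    source-style features and the statistics stored in the pre-trained
    source model (e.g., BatchNorm running mean/variance).

    gen_features: array of shape (batch, channels)
    src_mean, src_var: arrays of shape (channels,)
    """
    gen_mean = gen_features.mean(axis=0)
    gen_var = gen_features.var(axis=0)
    # Squared distance between generated and stored statistics.
    return float(np.sum((gen_mean - src_mean) ** 2)
                 + np.sum((gen_var - src_var) ** 2))

# The loss is zero only when the generated batch reproduces the
# stored source statistics exactly.
feats = np.array([[0.0, 2.0],
                  [2.0, 0.0]])  # mean = [1, 1], var = [1, 1]
loss = style_matching_loss(feats, np.array([1.0, 1.0]), np.array([1.0, 1.0]))
```

In practice such a term would be summed over the layers of the source network and combined with a pseudo-label consistency loss, but this sketch conveys the core alignment idea.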
