Abstract

Class-Incremental Unsupervised Domain Adaptation (CI-UDA) requires a model to learn continually over a sequence of steps, each containing unlabeled target-domain samples, while a labeled source dataset remains available throughout. The key to tackling the CI-UDA problem is to transfer domain-invariant knowledge from the source domain to the target domain while preserving the knowledge of previous steps during continual adaptation. However, existing methods introduce substantial biased source knowledge at the current step, causing negative transfer and unsatisfactory performance. To tackle these problems, we propose a novel CI-UDA method named Pseudo-Label Distillation Continual Adaptation (PLDCA). We design a Pseudo-Label Distillation module that leverages the discriminative information of the target domain to filter out biased knowledge at both the class and instance levels. In addition, Contrastive Alignment is proposed to reduce domain discrepancy by aligning the class-level feature representations of confident target samples with those of the source domain, and to exploit robust instance-level feature representations of unconfident target samples. Extensive experiments demonstrate the effectiveness and superiority of PLDCA.
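As a rough illustration of the kind of class- and instance-level filtering the abstract describes, the sketch below applies confidence-based pseudo-label selection to unlabeled target samples. All function names, thresholds, and the specific filtering rules here are assumptions for illustration only; the abstract does not specify PLDCA's actual distillation procedure.

```python
# Hypothetical sketch of pseudo-label filtering at the class and instance
# level. Thresholds and rules are illustrative assumptions, not PLDCA's
# published method.
import torch
import torch.nn.functional as F

def filter_pseudo_labels(logits: torch.Tensor,
                         class_thresh: float = 0.6,
                         instance_thresh: float = 0.8
                         ) -> tuple[torch.Tensor, torch.Tensor]:
    """Return (mask, pseudo): a boolean mask over target samples whose
    pseudo-labels survive both filtering stages, and the pseudo-labels.

    logits: [N, C] classifier outputs for unlabeled target samples.
    """
    probs = F.softmax(logits, dim=1)      # [N, C] class probabilities
    conf, pseudo = probs.max(dim=1)       # per-sample confidence and label

    # Class-level filtering: drop classes whose mean prediction confidence
    # is low, i.e. classes where the transferred source knowledge is
    # likely biased for the current step.
    num_classes = logits.size(1)
    class_conf = torch.zeros(num_classes)
    for c in range(num_classes):
        sel = pseudo == c
        if sel.any():
            class_conf[c] = conf[sel].mean()
    reliable_classes = class_conf >= class_thresh   # [C] boolean

    # Instance-level filtering: additionally require each individual
    # sample to be predicted with high confidence.
    mask = reliable_classes[pseudo] & (conf >= instance_thresh)
    return mask, pseudo

# Example usage: 5 target samples over 3 classes.
logits = torch.randn(5, 3)
mask, pseudo = filter_pseudo_labels(logits)
print(mask, pseudo)
```

Under this reading, samples passing both stages would be treated as confident targets (e.g., for class-level alignment with the source), while the remainder would be handled at the instance level, as the abstract's Contrastive Alignment suggests.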
