Abstract

In the open world, various label sets and domain configurations give rise to a variety of Domain Adaptation (DA) setups, including closed-set, partial-set, open-set, and universal DA, as well as multi-source and multi-target DA. Notably, existing DA methods are generally designed for one specific setup and may under-perform in setups they are not tailored to. This paper shifts the common paradigm of DA to Versatile Domain Adaptation (VDA), where a single method can handle several different DA setups without any modification. Towards this goal, we first delve into a general inductive bias, class confusion, and uncover that reducing such pairwise class confusion leads to significant transfer gains. With this insight, we propose a general class confusion loss (CC-Loss) that handles many setups. We estimate class confusion based only on classifier predictions and minimize it to enable accurate target predictions. Further, we improve the loss by enforcing consistency between confusion matrices computed under different data augmentations, encouraging invariance to distribution perturbations. Experiments on 2D vision and 3D vision benchmarks show that the CC-Loss performs competitively across mainstream DA setups. Code is available at https://github.com/thuml/Transfer-Learning-Library.
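To make the idea concrete, below is a minimal sketch of a pairwise class-confusion loss computed purely from classifier predictions. It assumes confusion between two classes is estimated as the batch inner product of their predicted probabilities; the details of the actual CC-Loss (e.g. probability re-weighting and temperature scaling) follow the paper and the linked repository, and the function name here is illustrative.

```python
import numpy as np

def class_confusion_loss(probs):
    """Sketch of a pairwise class-confusion loss.

    probs: (N, C) array of softmax predictions on unlabeled target data.
    Confusion between classes j and k is estimated as the inner product
    of their predicted probabilities over the batch. The loss is the
    mean off-diagonal mass of the row-normalized confusion matrix, so it
    is zero when every prediction is one-hot (no class confusion) and
    grows as predictions spread across classes.
    """
    n, c = probs.shape
    confusion = probs.T @ probs                       # (C, C) pairwise confusion
    confusion /= confusion.sum(axis=1, keepdims=True)  # row-normalize per class
    # Off-diagonal mass: how much each class is confused with the others.
    return (confusion.sum() - np.trace(confusion)) / c
```

Minimizing this quantity on target-domain predictions pushes the classifier toward confident, well-separated outputs; the augmentation-consistency variant additionally penalizes differences between confusion matrices computed from two augmented views of the same batch.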
