Unsupervised multi-class domain adaptation (multi-class UDA) has recently been proposed to bridge the gap between empirically effective methods for multi-class classification and the well-founded theory developed for the binary setting. However, existing multi-class UDA methods rely on model predictions to characterize the disagreement between multi-class scoring hypotheses, which is in turn used to optimize the divergence between the domain distributions. This self-training manner may produce inaccurate predictions that, in the absence of target-domain labels, damage the target structure and lead to sub-optimal performance. Moreover, this disagreement between multi-class scoring hypotheses does not capture the relationships among all of the classes, so multi-class UDA cannot properly connect to advanced practical UDA methods that perform class-conditional distribution alignment. We therefore propose to exploit target structure information and incorporate it into multi-class UDA to achieve class-conditional distribution alignment. We explain, both theoretically and experimentally, why accurate target structure information is important for reducing the expected error on the target domain. Notably, our method achieves state-of-the-art results on three commonly used benchmarks of different scales. In addition, using the target structure information, we propose a variant that copes with noisy open-world source domains (e.g., noisy labels and out-of-distribution samples), enhancing the robustness of our method. The source code is available at https://github.com/jingzhengli/Multi_Class_UDA .
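To make the notion of class-conditional distribution alignment concrete, the following is a minimal illustrative sketch (not the paper's actual objective): per-class feature centroids are computed for the source domain from its true labels and for the target domain from pseudo-labels, and the average distance between matched centroids measures the class-conditional gap. All function names and the toy data here are hypothetical; noisy pseudo-labels shift the target centroids, which is one way to see why accurate target structure information matters.

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class; zero vector if a class is absent."""
    dim = features.shape[1]
    cents = np.zeros((num_classes, dim))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(axis=0)
    return cents

def class_conditional_gap(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    """Average Euclidean distance between matched per-class centroids.

    Pseudo-labels stand in for the missing target annotations, so errors
    in the pseudo-labels directly distort the target centroids and hence
    the alignment signal.
    """
    src_c = class_centroids(src_feats, src_labels, num_classes)
    tgt_c = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    return float(np.linalg.norm(src_c - tgt_c, axis=1).mean())

# Toy example: two classes, target features shifted by a constant offset.
rng = np.random.default_rng(0)
src = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
src_y = np.array([0] * 50 + [1] * 50)
tgt = src + 1.0          # uniform domain shift of (1, 1)
tgt_y = src_y.copy()     # perfect pseudo-labels for illustration
gap = class_conditional_gap(src, src_y, tgt, tgt_y, 2)  # ~ sqrt(2)
```

Minimizing such a per-class gap (rather than a single global domain divergence) is what distinguishes class-conditional alignment from marginal alignment; in practice the pseudo-labels would come from the model or from target clustering rather than being given.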