Unsupervised domain adaptation transfers empirical knowledge from a label-rich source domain to a fully unlabeled target domain with a different distribution. A core idea of many existing approaches is to reduce the distribution divergence between the domains. However, these approaches address only part of the discriminative structure, which can be decomposed into four objectives: reducing the intraclass distances between domains, enlarging the interclass distances between domains, reducing the intraclass distances within each domain, and enlarging the interclass distances within each domain. Moreover, because few methods consider more than one type of objective, the consistency of the data representations produced by the different objectives has not yet been studied. In this paper, to address these issues, we propose a zeroth- and first-order difference discrimination (ZFOD) approach for unsupervised domain adaptation. ZFOD first optimizes all four objectives simultaneously. Then, to improve the discrimination consistency of the data across the two domains, we propose a first-order difference constraint that aligns the interclass differences across domains. Because the proposed method requires pseudolabels for the target domain, we adopt a recent pseudolabel generation method to alleviate the negative impact of imprecise pseudolabels. We conducted an extensive comparison with nine representative conventional methods and seven prominent deep-learning-based methods on four benchmark datasets. The experimental results demonstrate that the proposed method, as a conventional approach, not only significantly outperforms the nine conventional comparison methods but is also competitive with the seven deep-learning-based methods. In particular, our method achieves an accuracy of 93.4% on the Office+Caltech10 dataset, surpassing all the comparison methods. An ablation study further demonstrates the effectiveness of the proposed constraint in aligning the objectives.
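As an illustrative formalization only (the notation below is assumed for exposition and is not taken from the paper): let \(\mu_c^s\) and \(\mu_c^t\) denote the mean representation of class \(c\) in the source and target domains, respectively. A zeroth-order objective then acts on distances between these class means, for example

\[
\min \sum_{c} \left\| \mu_c^s - \mu_c^t \right\|_2^2 \quad \text{(intraclass distance between domains)},
\]

while a first-order difference constraint of the kind described could align the interclass differences across domains as

\[
\min \sum_{c \neq c'} \left\| \left( \mu_c^s - \mu_{c'}^s \right) - \left( \mu_c^t - \mu_{c'}^t \right) \right\|_2^2 .
\]

The remaining zeroth-order terms (enlarging the interclass distances between domains, and the shrink/enlarge counterparts within each domain) would follow the same pattern with the corresponding pairs of class means.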