Unsupervised domain adaptation (DA) enables a classifier trained on data from one domain to be applied to data from another domain without labels. Since the key to transferring a classifier across domains is to mitigate the data distribution mismatch for each class, most previous works focus, completely or partially, on global distribution matching across domains. The global data space, however, can be complicated, which makes modeling the global distribution difficult. To mitigate this problem, we present a novel unsupervised DA framework that addresses the DA problem with a robust class-wise matching strategy. Specifically, by minimizing a maximum mean discrepancy (MMD)-based class-wise Fisher discriminant across domains, the framework jointly optimizes two modules: a transferable feature learning module that, via a linear projection, reduces the distribution discrepancy between the same classes while increasing the discrepancy between different classes across domains, and a robust classifier that exploits both the supervised information in the source domain and the unsupervised low-rank property of the target domain. In experiments on three DA benchmark datasets, the proposed framework achieves state-of-the-art performance.
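To make the class-wise matching idea concrete, the sketch below illustrates an MMD-based class-wise Fisher-style criterion under a linear projection: within-class MMD (same class across domains) is accumulated for minimization, while between-class MMD (different classes across domains) is accumulated for maximization. This is a minimal illustration, not the paper's exact objective or optimization; the function names, the use of a linear-kernel MMD, and the assumption of pseudo-labels for the target domain are all simplifications introduced here.

```python
import numpy as np


def mmd2_linear(X, Y):
    """Squared MMD with a linear kernel: ||mean(X) - mean(Y)||^2."""
    diff = X.mean(axis=0) - Y.mean(axis=0)
    return float(diff @ diff)


def classwise_fisher_mmd(Xs, ys, Xt, yt_pseudo, W):
    """Illustrative class-wise criterion in the projected space X @ W:
    sum of within-class cross-domain MMDs minus sum of between-class
    cross-domain MMDs. Lower values mean the same classes are pulled
    together and different classes are pushed apart across domains.
    Target labels are assumed to be pseudo-labels (hypothetical here)."""
    Zs, Zt = Xs @ W, Xt @ W
    classes = np.unique(ys)
    within, between = 0.0, 0.0
    for c in classes:
        Zs_c, Zt_c = Zs[ys == c], Zt[yt_pseudo == c]
        if len(Zs_c) == 0 or len(Zt_c) == 0:
            continue  # skip classes missing in either domain
        within += mmd2_linear(Zs_c, Zt_c)
        for c2 in classes:
            Zt_c2 = Zt[yt_pseudo == c2]
            if c2 != c and len(Zt_c2) > 0:
                between += mmd2_linear(Zs_c, Zt_c2)
    return within - between


# Toy usage: random source/target features, labels, and projection.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 20)), rng.integers(0, 3, size=100)
Xt, yt = rng.normal(size=(80, 20)), rng.integers(0, 3, size=80)
W = rng.normal(size=(20, 5))  # linear projection to a 5-dim subspace
print(classwise_fisher_mmd(Xs, ys, Xt, yt, W))
```

In the full framework, a criterion of this kind would be optimized jointly with the classifier rather than evaluated for a fixed projection as shown here.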