Abstract

Unsupervised domain adaptation aims to use knowledge learned from a labeled source domain to make predictions on an unlabeled target domain. Most existing methods focus on aligning global distribution information to narrow the discrepancy between the source and target domains, and they show promising results. However, such domain-alignment methods cannot fully eliminate domain shift: because the learned manifold structure is loose, adjacent samples from different classes that lie in a high-density data region are easily misclassified by the hyperplane learned on the source domain. A second problem is that these methods align distributions only at the domain level and neglect class-level distribution information, which may cause classes to be misaligned across domains. In this paper, we propose to learn discriminative features and align the domains at the class level, which yields better source discriminative feature representations and benefits class-level domain alignment. The discriminative feature learning strategy produces a tight manifold structure and thereby promotes low-density separation between classes. Then, to perform fine-grained domain alignment and learn more transferable features, a class-aware domain alignment approach is proposed to align samples of the same class across the source and target domains. Experiments on two datasets show that our proposal improves the performance of deep unsupervised domain adaptation methods.
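The abstract does not specify how class-aware alignment is computed, but a common realization of the idea is to match per-class feature statistics across domains, using pseudo-labels on the unlabeled target data. The sketch below is an illustrative assumption, not the paper's method: it measures the squared distance between source and target class centroids, skipping classes absent from either batch (the function name and signature are hypothetical).

```python
import numpy as np

def class_aware_alignment_loss(src_feats, src_labels,
                               tgt_feats, tgt_pseudo_labels,
                               num_classes):
    """Hypothetical sketch of class-level alignment: average squared
    distance between per-class feature centroids of the source samples
    and the (pseudo-labeled) target samples."""
    loss = 0.0
    matched = 0
    for c in range(num_classes):
        src_c = src_feats[src_labels == c]
        tgt_c = tgt_feats[tgt_pseudo_labels == c]
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue  # class missing from one domain's batch; skip it
        diff = src_c.mean(axis=0) - tgt_c.mean(axis=0)
        loss += float(diff @ diff)  # squared Euclidean centroid gap
        matched += 1
    return loss / max(matched, 1)
```

Under this reading, the loss is zero when the per-class centroids coincide and grows as same-class samples drift apart across domains, which is the fine-grained alignment the abstract describes at the domain level for global methods.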
