Abstract

A key challenge in unsupervised domain adaptation (UDA) is how to fully exploit the structure and information of the data distribution, so that knowledge from the labeled source domain can be transferred to classify the unlabeled target domain more accurately. Although much research has been devoted to UDA, most existing work considers only distribution alignment or learns domain-invariant features through adversarial techniques, ignoring feature processing and intra-domain category information. To this end, we design a new cross-domain discrepancy metric, joint distribution maximum mean discrepancy (JD-MMD), and propose a deep unsupervised domain adaptation method, joint bi-adversarial learning for unsupervised domain adaptation (JBL-UDA). Specifically, JD-MMD measures cross-domain divergence in terms of both discrepancy and relevance, preserving the cross-domain joint distribution discrepancy as well as class discriminability. Building on this measure, JBL-UDA learns in two modalities: one aligns domains and classes implicitly through bi-adversarial learning, while the other aligns them explicitly via the JD-MMD metric. In addition, JBL-UDA exploits structural prior knowledge from data classes and domains to generate class-discriminative, domain-invariant representations. Finally, extensive evaluations show that the proposed method achieves state-of-the-art accuracy.
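The JD-MMD metric builds on the classical maximum mean discrepancy (MMD). As a minimal sketch of that underlying idea only, here is a biased RBF-kernel MMD estimate between source and target samples; this is the standard two-sample MMD, not the paper's joint-distribution variant, and the function names, `gamma` value, and synthetic data are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased estimate of squared MMD between samples Xs and Xt:
    # mean within-source + mean within-target - 2 * mean cross-kernel
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 5))  # "source" samples
Xt = rng.normal(2.0, 1.0, size=(200, 5))  # "target" samples with shifted mean
X0 = rng.normal(0.0, 1.0, size=(200, 5))  # fresh samples from the source distribution

print(mmd2(Xs, Xt))  # larger: the two distributions differ
print(mmd2(Xs, X0))  # smaller: same underlying distribution
```

A domain-adaptation method can minimize such a discrepancy between deep features of the two domains; JD-MMD extends this idea by additionally accounting for class-level (joint distribution) structure.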
