Abstract

Domain adaptation aims to improve the performance of a classifier in the target domain by reducing the difference between the two domains. Domain shift usually exists in both the marginal distribution and the conditional distribution, and their relative importance varies across datasets. Moreover, the marginal and conditional distribution distances influence each other. However, joint domain adaptation approaches rarely take these factors into account. Existing dynamic distribution alignment methods require a feature discriminator and must train a subdomain discriminator for each class; in addition, they ignore the interaction between the two distribution distances. In this article, we propose a dynamic joint domain adaptation approach, namely Joint Domain Adaptation Based on Adversarial Dynamic Parameter Learning (ADPL), to address these problems. In ADPL, both marginal distribution alignment and conditional distribution alignment are implemented through adversarial learning. The dynamic algorithm balances marginal and conditional distribution alignment using only two domain discriminators, and it also accounts for the mutual influence between the two distribution distances. Extensive classification and comparison experiments against several advanced domain adaptation methods on both text and image datasets demonstrate that ADPL achieves higher classification accuracy with less running time, indicating that ADPL outperforms state-of-the-art domain adaptation approaches.
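The abstract only sketches how the two alignment terms are balanced, so the following minimal Python sketch illustrates one way a dynamic balance factor could be derived from the errors of the two domain discriminators and used to weight the marginal and conditional adversarial losses. The function names, the A-distance-style weighting rule, and the numeric example are illustrative assumptions, not ADPL's published formulation.

def dynamic_balance_factor(err_marginal: float, err_conditional: float) -> float:
    """Estimate the relative weight mu of conditional alignment.

    Each discriminator error is converted into a proxy distance
    d = 2 * (1 - 2 * err): the lower a discriminator's error, the more
    separable (and thus more misaligned) that distribution still is,
    so it receives more weight.  This rule is an assumption for
    illustration only.
    """
    d_marginal = max(0.0, 2.0 * (1.0 - 2.0 * err_marginal))
    d_conditional = max(0.0, 2.0 * (1.0 - 2.0 * err_conditional))
    total = d_marginal + d_conditional
    if total == 0.0:
        return 0.5  # fall back to an even balance
    return d_conditional / total


def total_loss(cls_loss: float, adv_marginal: float, adv_conditional: float,
               err_marginal: float, err_conditional: float,
               trade_off: float = 1.0) -> float:
    """Combine the classification loss with dynamically weighted adversarial losses."""
    mu = dynamic_balance_factor(err_marginal, err_conditional)
    return cls_loss + trade_off * ((1.0 - mu) * adv_marginal + mu * adv_conditional)


# Example: the conditional discriminator has the lower error, so conditional
# alignment receives the larger weight in this training step.
print(total_loss(cls_loss=0.9, adv_marginal=0.6, adv_conditional=0.7,
                 err_marginal=0.45, err_conditional=0.30))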
