Abstract

Domain adaptation (DA) aims to generalize a learning technique from a source domain to a target domain with a different distribution. The essential problem in DA is therefore how to reduce the distribution discrepancy between the source and target domains. Typical methods embed adversarial learning into deep networks to learn transferable feature representations. However, existing adversarial DA methods may not sufficiently minimize the distribution discrepancy. In this article, a DA method called minimum adversarial distribution discrepancy (MADD) is proposed by combining feature distribution matching with adversarial learning. Specifically, we design a novel divergence metric loss, named maximum mean discrepancy based on conditional entropy (MMD-CE), and embed it in the adversarial DA network. The proposed MMD-CE loss addresses two problems: 1) the misalignment of class distributions between domains and 2) the equilibrium challenge in adversarial DA. Comparative experiments on the Office-31, ImageCLEF-DA, and Office-Home data sets against state-of-the-art methods show that our method performs favorably.
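The abstract does not specify the exact form of the MMD-CE loss, but the maximum mean discrepancy it builds on is a standard kernel two-sample statistic. As a minimal sketch (assuming a Gaussian RBF kernel and a biased estimator; the conditional-entropy weighting of MMD-CE is not reproduced here), the discrepancy between source and target features can be estimated as:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, mapped through a Gaussian (RBF) kernel.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of squared maximum mean discrepancy (MMD^2)
    # between source samples X and target samples Y.
    k_xx = rbf_kernel(X, X, gamma)
    k_yy = rbf_kernel(Y, Y, gamma)
    k_xy = rbf_kernel(X, Y, gamma)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

rng = np.random.default_rng(0)
# Features drawn from the same distribution vs. a shifted one.
same = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
shifted = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5)))
print(same < shifted)  # matching distributions yield the smaller MMD
```

In a DA network such a term is typically computed on minibatch features from both domains and minimized jointly with the adversarial objective, driving the two feature distributions together.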
