Abstract
Domain adaptation (DA) refers to generalizing a learning technique from a source domain to a target domain whose data follow different distributions. The essential problem in DA is therefore how to reduce the distribution discrepancy between the source and target domains. Typical methods embed adversarial learning into deep networks to learn transferable feature representations. However, existing adversarial DA methods may not sufficiently minimize the distribution discrepancy. In this article, a DA method, minimum adversarial distribution discrepancy (MADD), is proposed by combining feature-distribution matching with adversarial learning. Specifically, we design a novel divergence metric loss, named maximum mean discrepancy based on conditional entropy (MMD-CE), and embed it in the adversarial DA network. The proposed MMD-CE loss addresses two problems: 1) the misalignment caused by different class distributions between domains and 2) the equilibrium challenge in adversarial DA. Comparative experiments on the Office-31, ImageCLEF-DA, and Office-Home data sets show that our method performs favorably against state-of-the-art methods.
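The abstract does not give the exact MMD-CE formulation, but the standard maximum mean discrepancy (MMD) term it builds on can be sketched as follows. This is a minimal illustration with a Gaussian kernel; the variable names and the choice of kernel bandwidth are assumptions, not the paper's method.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of squared MMD: the RKHS distance between the
    # empirical kernel mean embeddings of the two samples (always >= 0).
    k_ss = gaussian_kernel(source, source, sigma)
    k_tt = gaussian_kernel(target, target, sigma)
    k_st = gaussian_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 4))       # "source domain" features
tgt_near = rng.normal(0.0, 1.0, size=(200, 4))  # same distribution as source
tgt_far = rng.normal(2.0, 1.0, size=(200, 4))   # mean-shifted distribution

print(mmd2(src, tgt_near))  # small: distributions match
print(mmd2(src, tgt_far))   # larger: discrepancy is detected
```

Minimizing a term like `mmd2` over learned features pulls the source and target feature distributions together; MADD additionally weights this alignment with conditional-entropy information to mitigate class-misalignment, per the abstract.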
Published in: IEEE Transactions on Cognitive and Developmental Systems