Abstract

Domain adaptation has emerged as a crucial technique for addressing domain shift, which arises when an existing model is applied to a new data distribution. Adversarial learning has made impressive progress in learning domain-invariant representations by building bridges between the two domains. However, existing adversarial learning methods tend to employ only a domain discriminator, or to generate adversarial examples that distort the original domain distribution. Moreover, little work has considered confident continuous learning with an existing source classifier for domain adaptation. In this paper, we develop adversarial continuous learning in a unified deep architecture. We also propose a novel correlated loss to minimize the discrepancy between the source and target domains. Our model increases robustness by incorporating high-confidence samples from the target domain. The transfer loss jointly considers the original source images and transfer examples in the target domain. Extensive experiments demonstrate significant improvements in classification accuracy over the state of the art.
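
The abstract does not specify implementation details, but the three ingredients it names (an adversarially trained domain discriminator, a correlated loss between source and target features, and the use of high-confidence target samples) can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: the module names, the CORAL-style covariance form of the correlated loss, the confidence threshold, and the loss weights are all assumptions.

```python
# Illustrative sketch only; hyperparameters and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureExtractor(nn.Module):
    """Maps inputs to a shared feature space used by both domains."""
    def __init__(self, in_dim=256, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)


def correlated_loss(f_s, f_t):
    # One plausible reading of the "correlated loss": align second-order
    # statistics (covariances) of source and target features.
    c_s = torch.cov(f_s.T)
    c_t = torch.cov(f_t.T)
    return (c_s - c_t).pow(2).mean()


def transfer_objective(feat, clf, disc, x_s, y_s, x_t,
                       w_adv=1.0, w_corr=1.0, w_pseudo=0.5, thresh=0.9):
    f_s, f_t = feat(x_s), feat(x_t)

    # 1) Supervised loss on the original source images.
    cls_loss = F.cross_entropy(clf(f_s), y_s)

    # 2) Adversarial term: the feature extractor tries to make target
    #    features indistinguishable from source features ("source" = 1).
    adv_loss = F.binary_cross_entropy_with_logits(
        disc(f_t), torch.ones(f_t.size(0), 1))

    # 3) Correlated loss aligning source and target feature statistics.
    corr_loss = correlated_loss(f_s, f_t)

    # 4) High-confidence target samples: keep predictions whose softmax
    #    probability exceeds a threshold and reuse them as pseudo-labels.
    with torch.no_grad():
        probs = F.softmax(clf(f_t), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > thresh
    pseudo_loss = (F.cross_entropy(clf(f_t[keep]), pseudo[keep])
                   if keep.any() else torch.zeros(()))

    return cls_loss + w_adv * adv_loss + w_corr * corr_loss + w_pseudo * pseudo_loss


# Usage sketch: simple linear heads stand in for the label classifier and
# the domain discriminator (the discriminator is trained in a separate step).
feat = FeatureExtractor()
clf = nn.Linear(64, 10)
disc = nn.Linear(64, 1)
x_s, y_s, x_t = torch.randn(32, 256), torch.randint(0, 10, (32,)), torch.randn(32, 256)
loss = transfer_objective(feat, clf, disc, x_s, y_s, x_t)
```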
