Abstract

Deep convolutional neural networks (CNNs) learn robust representations and power many computer vision tasks such as object recognition. However, when CNNs are applied to industrial visual systems, they often suffer from the domain shift between training and testing data. Such shift can be caused by different environments, camera types, and object exteriors, degrading performance and hindering the practical application of CNNs to real-world visual recognition. To tackle this problem, adversarial domain adaptation (ADA) reduces the shift through min–max optimization. However, CNNs with ADA are hard to train because of the instability of adversarial training. In this paper, we propose a unified and easy-to-train domain adaptation framework, namely Attention-based Domain-confused Adversarial Domain ADaptation (AD3). Our method leverages both adversarial and statistical domain alignment, allows flexible source and target feature extractors, and performs feature-level and attention-level alignment simultaneously. The statistical domain alignment promotes and stabilizes adversarial domain learning, reducing the manual work of tuning hyper-parameters. Experimental results validate that our method achieves better adaptation and faster convergence in adversarial domain learning than existing state-of-the-art methods on the DIGITS, Office-31 and VisDA domain adaptation benchmarks.
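The combination of adversarial and statistical alignment described above can be sketched as a combined loss: a domain-confusion term that pushes the domain classifier toward uniform predictions, plus a statistical term that directly matches feature distributions across domains. The sketch below is illustrative, not the paper's exact formulation: the function names, the linear-kernel MMD choice for the statistical term, and the weighting `lam` are all assumptions.

```python
import numpy as np

def mmd_loss(source_feats, target_feats):
    """Statistical alignment term (illustrative): squared maximum mean
    discrepancy with a linear kernel, i.e. the squared distance between
    the mean feature embeddings of the two domains."""
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

def domain_confusion_loss(domain_probs):
    """Adversarial (confusion) term: cross-entropy of the domain
    classifier's outputs against a uniform (0.5/0.5) target, minimized
    when the features give the discriminator no domain cue."""
    eps = 1e-12  # numerical safety for log
    return float(-np.mean(0.5 * np.log(domain_probs + eps)
                          + 0.5 * np.log(1.0 - domain_probs + eps)))

def total_alignment_loss(source_feats, target_feats, domain_probs, lam=0.1):
    """Combined objective (hypothetical weighting): the statistical term
    supplements the adversarial term, which is the stabilizing role the
    abstract attributes to statistical domain alignment."""
    return (domain_confusion_loss(domain_probs)
            + lam * mmd_loss(source_feats, target_feats))
```

When source and target features are identically distributed, the MMD term vanishes, and when the domain classifier outputs 0.5 everywhere, the confusion term reaches its minimum of log 2, so the total loss bottoms out exactly at the fully aligned state.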
