Abstract

Deep neural networks have been successfully applied to domain adaptation, which uses the labeled data of a source domain to supply useful information to a target domain. Deep Adaptation Network (DAN) is one such framework: it utilizes Multi-Kernel Maximum Mean Discrepancy (MK-MMD) to align feature distributions in a reproducing kernel Hilbert space. However, DAN does not perform very well at feature-level transfer, and its assumption that the source and target domains share a classifier is too strict for many adaptation scenarios. In this paper, we further improve the adaptability of DAN by incorporating Domain Confusion (DC) and Classifier Adaptation (CA). To achieve this, we propose a novel domain adaptation method named C2DAN. Our approach first enables Domain Confusion by adversarially training a domain discriminator. For Classifier Adaptation, a residual block is added to the source-domain classifier in order to learn the difference between the source and target classifiers. Beyond validating our framework on the standard domain adaptation dataset Office-31, we also introduce and evaluate on the Comprehensive Cars (CompCars) dataset; the experimental results demonstrate the effectiveness of the proposed framework C2DAN.
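To make the Classifier Adaptation idea concrete — the target classifier is modeled as the frozen source classifier plus a small learned residual correction — here is a minimal NumPy sketch. All layer shapes, weight names, and the single-linear-layer classifier are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def source_classifier(feats, w_s):
    """Frozen source classifier, reduced to one linear layer for illustration."""
    return feats @ w_s

def residual_correction(feats, w1, w2):
    """Small residual block (linear -> ReLU -> linear) standing in for the paper's block."""
    return np.maximum(feats @ w1, 0.0) @ w2

def target_classifier(feats, w_s, w1, w2):
    """f_T(x) = f_S(x) + delta_f(x): the residual learns the source/target classifier gap."""
    return source_classifier(feats, w_s) + residual_correction(feats, w1, w2)

feats = rng.normal(size=(8, 32))   # a batch of deep features (hypothetical sizes)
w_s = rng.normal(size=(32, 10))    # source classifier weights for 10 classes
w1 = rng.normal(size=(32, 16))
w2 = np.zeros((16, 10))            # zero-init: the residual correction starts at zero

# With the residual's last layer at zero, f_T coincides with f_S before any adaptation,
# so training only has to learn the (small) difference between the two classifiers.
assert np.allclose(target_classifier(feats, w_s, w1, w2),
                   source_classifier(feats, w_s))
```

Zero-initializing the residual's output layer is a common design choice for such blocks: it guarantees the target classifier starts out identical to the source classifier and only drifts away as far as the adaptation loss demands.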

Highlights

  • In recent years, deep learning has made great achievements in a large number of computer vision tasks, such as image recognition [1,2,3], object detection [4,5], fine-grained classification [6,7], semantic segmentation [8,9] and so on

  • The best combination of MK-MMD, Domain Confusion (DC) and Classifier Adaptation (CA) in different scenarios is obtained through experiments on the Office-31 and CompCars datasets; the experimental results show that our improved method C2DAN surpasses the performance of DAN

  • When we combine MK-MMD, domain confusion and classifier adaptation, we explore each component's contribution under different domain adaptation metrics in different scenarios
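One way to picture the Domain Confusion component is through the pair of losses it optimizes: the domain discriminator minimizes a binary cross-entropy over domain labels, while the feature extractor is trained against it so that discriminator outputs drift toward 0.5. The sketch below uses a linear discriminator and a cross-entropy-against-uniform confusion loss as a simplified stand-in for the adversarial objective; it is not necessarily the paper's exact formulation, and all names and shapes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_loss(feats, domain_labels, w):
    """Binary cross-entropy of a linear domain discriminator (1 = source, 0 = target)."""
    p = sigmoid(feats @ w)
    eps = 1e-12
    return -np.mean(domain_labels * np.log(p + eps)
                    + (1.0 - domain_labels) * np.log(1.0 - p + eps))

def confusion_loss(feats, w):
    """Feature-extractor objective: cross-entropy against a uniform (0.5/0.5) domain
    distribution, minimized when the discriminator cannot tell the domains apart."""
    p = sigmoid(feats @ w)
    eps = 1e-12
    return -np.mean(0.5 * np.log(p + eps) + 0.5 * np.log(1.0 - p + eps))

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 8))        # pooled source + target features (hypothetical)
labels = np.repeat([1.0, 0.0], 16)      # first half source, second half target
w = rng.normal(size=8)

# The confusion loss is bounded below by ln 2, reached exactly when every
# discriminator output equals 0.5, i.e. the features carry no domain information.
assert confusion_loss(feats, np.zeros(8)) >= np.log(2) - 1e-9
assert confusion_loss(feats, w) >= np.log(2) - 1e-9
```

In adversarial training these two losses alternate: the discriminator weights are updated to lower `discriminator_loss`, then the feature extractor is updated to lower `confusion_loss`, and domain-invariant features emerge at the equilibrium.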


Summary

Introduction

Deep learning has made great achievements in a large number of computer vision tasks, such as image recognition [1,2,3], object detection [4,5], fine-grained classification [6,7] and semantic segmentation [8,9] (Sensors 2020, 20, 3606). Many domain adaptation methods minimize the domain discrepancy by using a distance metric such as Maximum Mean Discrepancy (MMD) [17], CORAL [18,19] or Kullback-Leibler divergence [20], among which MMD-based methods are widely used. In this sort of method, the difference between the source and target domains is usually reduced by optimizing the MMD in a Reproducing Kernel Hilbert Space (RKHS), so that a domain-invariant feature representation is learned. The best combination of MK-MMD, DC and CA in different scenarios is obtained through experiments on the Office-31 and CompCars datasets, and the experimental results show that our improved method C2DAN surpasses the performance of DAN.
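The MMD-based alignment described above can be sketched numerically. Below is a minimal NumPy estimate of multi-kernel MMD² between two feature batches, averaging the squared MMD over several Gaussian bandwidths; the bandwidth values and the simple biased estimator are assumptions made for clarity (DAN itself uses a more elaborate unbiased estimator with learned kernel weights).

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """RBF kernel matrix between the rows of x and the rows of y."""
    sq = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2.0 * x @ y.T
    return np.exp(-sq / (2.0 * sigma**2))

def mk_mmd(source, target, sigmas=(1.0, 2.0, 4.0)):
    """Biased multi-kernel MMD^2 estimate: mean of per-bandwidth MMD^2 values."""
    total = 0.0
    for s in sigmas:
        k_ss = gaussian_kernel(source, source, s).mean()
        k_tt = gaussian_kernel(target, target, s).mean()
        k_st = gaussian_kernel(source, target, s).mean()
        total += k_ss + k_tt - 2.0 * k_st
    return total / len(sigmas)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 16))       # hypothetical source features
tgt_near = rng.normal(0.0, 1.0, size=(64, 16))  # same distribution as the source
tgt_far = rng.normal(3.0, 1.0, size=(64, 16))   # mean-shifted distribution

# A larger distribution gap yields a larger MK-MMD value, which is exactly
# the quantity DAN-style methods minimize to align the two feature distributions.
assert mk_mmd(src, tgt_near) < mk_mmd(src, tgt_far)
```

Minimizing this quantity with respect to the feature extractor's parameters pulls the source and target feature distributions together in the RKHS induced by the chosen kernels.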

Related Work
C2DAN: Improved Deep Adaptation Network
Domain Confusion
Classifier Adaptation
Experiment Results and Analysis
Experiment Procedure
Analysis of Weights
Accuracy
The Introduction of the Dataset
Experiment Details and Results
Method
Accuracy and Analysis of Various Categories
Conclusions