Abstract

Unsupervised domain adaptation is an appealing technique for learning robust classifiers on an unlabeled target domain by borrowing knowledge from a well-established source domain. However, previous works suffer from two main limitations: 1) a classifier trained on labeled source data is prone to overfitting the source distribution, which lowers its performance on the target domain; 2) the adaptation process can be misled when conditional distributions are matched using hard pseudo labels of target samples. This paper presents a Dual-Level Adaptive and Discriminative (DLAD) classifier learning framework, in which transfer classifier learning and distribution adaptation are mutually beneficial for effective knowledge transfer. Specifically, we achieve a domain-level adaptive classifier by applying structural risk minimization (SRM) on both domains and performing weighted distribution adaptation, which facilitates joint classifier learning in a semi-supervised manner. To further achieve a class-level discriminative classifier, we explicitly leverage unlabeled target data to promote classifier learning based on class probabilities, which refines the decision boundary to be more discriminative for unlabeled target data. To the best of our knowledge, DLAD is the first attempt to apply the principle of SRM to the target domain, which significantly boosts the discriminative power of the transfer classifier and yields a tighter generalization bound. Experimental evaluations on several standard cross-domain datasets show that DLAD significantly outperforms other competitive methods.
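
As a rough illustration of the second point above (matching conditional distributions with soft class probabilities rather than hard pseudo labels), the sketch below computes a linear-kernel class-conditional discrepancy in which each target sample contributes to every class mean in proportion to its predicted probability. This is an illustrative simplification under stated assumptions, not the paper's actual DLAD objective; all names (`soft_conditional_mmd`, `probs_t`, etc.) are hypothetical, and the code assumes every class is represented in the source batch.

```python
import numpy as np

def soft_conditional_mmd(Xs, ys, Xt, probs_t, num_classes):
    """Linear-kernel class-conditional discrepancy with soft target weights.

    Xs:       (ns, d) source features
    ys:       (ns,)   source labels in {0, ..., num_classes - 1}
    Xt:       (nt, d) target features
    probs_t:  (nt, C) predicted class probabilities for target samples
    """
    total = 0.0
    for c in range(num_classes):
        # Source class mean (assumes class c is present in the source batch).
        mu_s = Xs[ys == c].mean(axis=0)
        # Soft target class mean: every target sample contributes with
        # weight p(c | x) instead of a hard 0/1 pseudo-label assignment.
        w = probs_t[:, c]
        mu_t = (w[:, None] * Xt).sum(axis=0) / (w.sum() + 1e-8)
        # Squared distance between class means (linear-kernel MMD).
        total += np.sum((mu_s - mu_t) ** 2)
    return total

# Toy usage with random features (purely illustrative).
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 16)), rng.integers(0, 3, 100)
Xt = rng.normal(size=(80, 16))
probs_t = rng.dirichlet(np.ones(3), size=80)
print(soft_conditional_mmd(Xs, ys, Xt, probs_t, num_classes=3))
```

With hard pseudo labels, a confidently mislabeled target sample pulls its assigned class mean in the wrong direction; the soft weighting spreads that influence across classes according to the classifier's uncertainty, which is the failure mode the abstract's second limitation describes.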
