Abstract

The bi-classifier paradigm is widely adopted as an adversarial method for addressing the domain shift challenge in unsupervised domain adaptation (UDA) by training two classifiers evenly. In this paper, we report that although the two even classifiers can strengthen the generalization ability of the feature extractor, their decision boundaries shrink toward the source domain during the adversarial process, which weakens the discriminative ability of the learned model. To resolve this dilemma, we disentangle the functions of the two classifiers and introduce uneven bi-classifier learning for domain adaptation. Specifically, we leverage the Frobenius norm (F-norm) of the classifier predictions, instead of the classifier disagreement, to drive adversarial learning. In this way, the feature extractor can be adversarially trained with a single classifier, while the other classifier preserves the target-specific decision boundaries. The proposed uneven bi-classifier learning protocol simultaneously enhances the generalization ability of the feature extractor and expands the decision boundary of the target classifier. Extensive experiments on large-scale datasets show that our method significantly surpasses previous domain adaptation methods, even when only a single classifier is involved.
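To make the F-norm objective concrete, the following is a minimal NumPy sketch of the kind of scalar the abstract refers to: the Frobenius norm of a batch's softmax prediction matrix. The function name and the choice of NumPy are illustrative assumptions, not the paper's implementation; the point is only that this norm is large for confident (near one-hot) predictions and small for uniform ones, so a feature extractor and a single classifier can play an adversarial game over it without needing two classifiers' disagreement.

```python
import numpy as np

def softmax(logits):
    # Numerically stable row-wise softmax.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def prediction_fnorm(logits):
    """Frobenius norm of the softmax prediction matrix (batch x classes).

    For a batch of B samples over C classes, one-hot predictions give
    ||P||_F = sqrt(B), while uniform predictions give sqrt(B / C).
    Maximizing this scalar pushes predictions toward confident outputs;
    minimizing it pushes them toward ambiguous ones, which is the kind
    of single-scalar adversarial signal described in the abstract.
    """
    P = softmax(logits)
    return np.linalg.norm(P, ord='fro')
```

For example, with a batch of two samples and two classes, confident logits yield a norm near sqrt(2) ≈ 1.414, while all-zero (uniform) logits yield exactly 1.0 = sqrt(B / C).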
