Abstract

A classifier trained on one dataset rarely works on other datasets obtained under different conditions because of domain shift. Such a problem is usually addressed with domain adaptation methods. In this paper, we propose a novel unsupervised domain adaptation (UDA) method based on Interchangeable Batch Normalization (InterBN), which fuses channels across deep neural networks for adversarial domain adaptation. Specifically, we first observe that channels with small batch normalization scaling factors have little influence on the overall domain adaptation, and we then prove theoretically that the scaling factors of some channels will necessarily approach zero when a sparsity regularization is imposed. We then replace the channels with smaller scaling factors in the source domain with the mean of the channels with larger scaling factors in the target domain, or vice versa. Such a simple but effective channel fusion scheme can drastically increase the domain adaptation ability. Extensive experimental results show that our InterBN significantly outperforms current adversarial domain adaptation methods by a large margin on four visual benchmarks. In particular, InterBN achieves a remarkable improvement of 7.7% over the conditional adversarial adaptation networks (CDAN) on the VisDA-2017 benchmark.
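To make the two ideas in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' released code), assuming a PyTorch-style network with separate BatchNorm layers for the source and target domains. It shows (a) an L1 sparsity penalty on the BN scaling factors, and (b) the channel interchange step, where low-scale source channels are overwritten with the mean of high-scale target channels. The threshold and penalty weight are illustrative hyperparameters, not values from the paper.

import torch
import torch.nn as nn


def bn_sparsity_penalty(bn_layers, weight=1e-4):
    """L1 regularization on BN scaling factors (gamma); pushes some channels'
    scales toward zero so they can be identified as uninformative."""
    return weight * sum(bn.weight.abs().sum() for bn in bn_layers)


def fuse_channels(feat_src, feat_tgt, bn_src, bn_tgt, threshold=1e-2):
    """Replace low-scale source channels with the mean of high-scale target
    channels (the interchange step described in the abstract).

    feat_src, feat_tgt: activations of shape (N, C, H, W), same batch size
    bn_src, bn_tgt:     the nn.BatchNorm2d layers that produced them
    """
    small_src = bn_src.weight.abs() < threshold   # source channels to overwrite
    large_tgt = bn_tgt.weight.abs() >= threshold  # informative target channels
    if large_tgt.any() and small_src.any():
        # Mean over the informative target channels, shape (N, 1, H, W).
        tgt_mean = feat_tgt[:, large_tgt].mean(dim=1, keepdim=True)
        feat_src = feat_src.clone()
        feat_src[:, small_src] = tgt_mean.expand(-1, int(small_src.sum()), -1, -1)
    return feat_src

In practice the penalty would be added to the adversarial training loss, and the fusion applied symmetrically in both directions; the paper's exact channel-selection rule and fusion placement may differ from this sketch.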
