Abstract

Unsupervised domain adaptation (UDA) addresses the problem of transferring knowledge from a labeled source domain to an unlabeled target domain when the two domains have distinct data distributions; the purpose of domain adaptation is therefore to mitigate the distribution divergence between the two domains. Many existing UDA methods rely only on the traditional batch normalization layer, which can introduce substantial feature redundancy and degrade performance. In this paper, we introduce a novel deep learning paradigm that targets feature redundancy in UDA to enhance adaptation ability. Specifically, we first show that feature redundancy also arises in UDA, a fact that has been ignored by most previous efforts. We use feature similarity as a metric of feature redundancy and then analyze the relationship between a uniform feature spectrum and minimal feature similarity. Based on this relationship, we reduce cross-domain feature redundancy for UDA by making the distribution of the feature spectrum more uniform in a bi-level manner. At the first level, we propose a cross-domain batch normalization with a whitening module (xBN) that ensures compact domain-specific features while simultaneously learning domain-invariant features. Building on the domain-specific features produced at the first level, at the second level we introduce an orthogonal regularizer (OR) that makes the distribution of the feature spectrum more uniform, thereby mitigating domain-invariant feature redundancy. This bi-level mechanism greatly reduces feature redundancy for UDA. To evaluate its efficacy, we plug the two novel modules (i.e., xBN and OR) into convolutional neural networks (CNNs) to form our UDA model and conduct empirical evaluations on five cross-domain object recognition benchmarks, including both classical and large-scale image datasets. Experimental results show that the proposed UDA model achieves state-of-the-art performance both quantitatively and qualitatively. Our source code will be released after publication.
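To make the two mechanisms concrete, the sketch below illustrates one plausible PyTorch-style realization. The abstract does not specify the exact xBN or OR formulations, so every name, signature, and design detail here is a hypothetical illustration, not the authors' implementation: domain-specific ZCA whitening with shared affine parameters stands in for xBN, and a penalty pushing the feature covariance toward the identity (a uniform spectrum) stands in for OR.

```python
import torch
import torch.nn as nn


class WhitenedCrossDomainBN(nn.Module):
    """Hypothetical xBN-style layer: each domain is whitened with its own
    statistics (compact domain-specific features), while the affine
    parameters are shared across domains (domain-invariant features)."""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))

    def _whiten(self, x):
        # x: (N, C) flattened features from a single domain.
        xc = x - x.mean(dim=0, keepdim=True)
        cov = xc.t() @ xc / (x.size(0) - 1)
        cov = cov + self.eps * torch.eye(x.size(1), device=x.device)
        # ZCA whitening: multiply by cov^{-1/2} via eigendecomposition.
        eigvals, eigvecs = torch.linalg.eigh(cov)
        inv_sqrt = eigvecs @ torch.diag(eigvals.clamp_min(self.eps).rsqrt()) @ eigvecs.t()
        return xc @ inv_sqrt

    def forward(self, x_src, x_tgt):
        # Separate whitening per domain, shared (gamma, beta) across domains.
        return (self._whiten(x_src) * self.gamma + self.beta,
                self._whiten(x_tgt) * self.gamma + self.beta)


def orthogonal_regularizer(features, weight=1e-3):
    """Hypothetical OR-style penalty: driving the feature covariance toward
    the identity flattens the feature spectrum, which minimizes pairwise
    feature similarity and hence feature redundancy."""
    f = features - features.mean(dim=0, keepdim=True)
    cov = f.t() @ f / (f.size(0) - 1)
    eye = torch.eye(cov.size(0), device=cov.device)
    return weight * (cov - eye).pow(2).sum()
```

Under these assumptions, the regularizer would be added to the task loss on the shared features, so the spectrum-uniformity objective and the classification objective are optimized jointly.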
