Abstract

Unsupervised domain adaptation (UDA) enables a learning machine to adapt from a labeled source domain to an unlabeled target domain under distribution shift. Thanks to the strong representation ability of deep neural networks, recent remarkable achievements in UDA rely on learning domain-invariant features. Intuitively, the hope is that a good feature representation, together with the hypothesis learned from the source domain, can generalize well to the target domain. However, the learning processes of domain-invariant features and source hypotheses inevitably involve domain-specific information that degrades the generalizability of UDA models on the target domain. The lottery ticket hypothesis shows that only a subset of parameters is essential for generalization. Motivated by this, we find that only a subset of parameters is essential for learning domain-invariant information; we term these transferable parameters, as they generalize well in UDA. In contrast, the remaining parameters tend to fit domain-specific details and often cause generalization to fail; we term these untransferable parameters. Driven by this insight, we propose Transferable Parameter Learning (TransPar) to reduce the side effect of domain-specific information in the learning process and thus enhance the memorization of domain-invariant information. Specifically, according to the degree of distribution discrepancy, we divide all parameters into transferable and untransferable ones in each training iteration, and then apply separate update rules to the two types of parameters. Extensive experiments on image classification and regression tasks (keypoint detection) show that TransPar outperforms prior art by non-trivial margins. Moreover, experiments demonstrate that TransPar can be integrated into the most popular deep UDA networks and easily extended to handle any data distribution shift scenario.
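
To make the core idea concrete, the following is a minimal, hypothetical PyTorch-style sketch of the per-iteration procedure the abstract describes: score each parameter entry, split parameters into transferable and untransferable subsets, and apply separate update rules. The scoring criterion (elementwise gradient magnitude), the split ratio, and the damped update for untransferable parameters are illustrative assumptions standing in for the paper's "distribution discrepancy degree" criterion, not the exact TransPar mechanism.

```python
# Hypothetical sketch of per-iteration transferable-parameter selection.
# The ranking score and update rules are illustrative assumptions; the
# abstract only states that parameters are split by a distribution
# discrepancy degree and updated with separate rules.
import torch

def split_and_update(model, loss, optimizer,
                     transferable_ratio=0.8, untransferable_scale=0.1):
    """Backpropagate once, rank parameter entries by an illustrative
    importance score, then apply a full update to the 'transferable'
    entries and a damped update to the 'untransferable' ones."""
    optimizer.zero_grad()
    loss.backward()

    for p in model.parameters():
        if p.grad is None:
            continue
        # Illustrative proxy for the discrepancy-based criterion:
        # elementwise gradient magnitude (an assumption).
        score = p.grad.detach().abs()
        k = max(1, int(transferable_ratio * score.numel()))
        threshold = torch.topk(score.flatten(), k).values.min()
        transferable_mask = (score >= threshold).float()
        # Separate update rules: keep full gradients for transferable
        # entries, shrink gradients for untransferable ones to suppress
        # fitting of domain-specific details.
        p.grad.mul_(transferable_mask
                    + untransferable_scale * (1.0 - transferable_mask))

    optimizer.step()
```

In a standard training loop, a routine like this would take the place of the usual `loss.backward(); optimizer.step()` pair, which is how such a parameter-splitting rule could plug into existing deep UDA networks.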
