Abstract

Unsupervised domain adaptation (UDA) aims to transfer and adapt knowledge from a labeled source domain to an unlabeled target domain. Traditionally, geometry-based alignment methods, e.g., Orthogonal Procrustes Alignment (OPA), formed an important class of solutions to this problem. Despite their mathematical tractability, they rarely produce effective adaptation performance on recent benchmarks. Instead, state-of-the-art approaches rely on sophisticated distribution alignment strategies such as adversarial training. In this paper, we show that conventional OPA, when coupled with powerful deep feature extractors and a novel bi-level optimization formulation, is indeed an effective choice for handling challenging distribution shifts. Compared to existing UDA methods, our approach offers the following benefits: (i) computational efficiency: by isolating the alignment and classifier training steps during adaptation and using deep OPA, our approach is computationally very efficient (typically requiring only about 700K parameters beyond the base feature extractor, compared to the millions of extra parameters required by state-of-the-art UDA baselines); (ii) data efficiency: our approach does not require updating the feature extractor during adaptation and hence remains effective even with limited target data; (iii) improved generalization: the resulting models are intrinsically well-regularized and generalize well even in the challenging partial DA setting, i.e., when the target domain contains only a subset of the classes observed in the source domain; and (iv) incremental training: our approach allows progressive adaptation of models to novel domains (unseen during training) without retraining the model from scratch.
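For context, the classical OPA step that the abstract builds on has a simple closed-form solution via the SVD. Below is a minimal NumPy sketch of that classical step, not the paper's deep, bi-level variant; the function name and toy feature matrices are illustrative. Note that classical OPA assumes row-wise correspondence between the two feature sets, an assumption that does not hold in the unpaired UDA setting the paper targets, and this sketch does not reproduce how the paper handles that.

```python
import numpy as np

def orthogonal_procrustes(source_feats: np.ndarray, target_feats: np.ndarray) -> np.ndarray:
    """Classical Orthogonal Procrustes Alignment (OPA).

    Finds the orthogonal matrix R minimizing ||source_feats @ R - target_feats||_F.
    The closed-form solution is R = U @ Vt, where U, S, Vt is the SVD of
    source_feats.T @ target_feats.
    """
    # Cross-covariance between the two feature sets (assumes row-wise pairing).
    M = source_feats.T @ target_feats
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

# Toy usage: recover a known rotation applied to fixed (e.g., frozen-extractor) features.
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # random orthogonal matrix
source = rng.normal(size=(500, 64))                  # "source domain" features
target = source @ R_true                             # rotated "target domain" features
R_est = orthogonal_procrustes(source, target)
assert np.allclose(source @ R_est, target, atol=1e-6)
```

Because the orthogonal map is a single d x d matrix, the alignment step adds very few parameters relative to the feature extractor, which is consistent with the computational-efficiency claim above.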
