Abstract

Unsupervised domain adaptation (UDA) aims to transfer a classifier trained on a labeled source domain to a related unlabeled target domain. Recent progress in this area has tracked the evolution of network architectures from convolutional neural networks (CNNs) to transformers and hybrids of the two. However, this progress comes at the cost of high computational overhead or complex training procedures. In this paper, we propose an efficient alternative hybrid architecture that marries transformers with contextual convolution (TransConv) to solve UDA tasks. Unlike previous transformer-based UDA architectures, TransConv has two distinctive aspects: (1) it augments the multilayer perceptron (MLP) of the transformer encoders with Gaussian channel attention fusion for robustness, and (2) it mixes contextual features into highly efficient dynamic convolutions for cross-domain interaction. As a result, TransConv can calibrate inter-domain feature semantics using both global and local features. Experimental results on five benchmarks show that TransConv achieves remarkable results with high efficiency compared to existing UDA methods.
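The abstract names two mechanisms but, with the full text unavailable here, gives no implementation details. The two PyTorch sketches below are therefore hypothetical illustrations of the general ideas, not the authors' method: every module name, shape, and design choice (the per-channel statistic, the learnable Gaussian parameters, the kernel-bank size) is an assumption.

First, a minimal reading of "Gaussian channel attention fusion" inside a transformer encoder MLP: channel weights are computed from a Gaussian function of a per-channel statistic and fused with the MLP output.

```python
# Hypothetical sketch (not the authors' code): a transformer encoder MLP
# whose output is re-weighted by a Gaussian channel attention.
import torch
import torch.nn as nn


class GaussianChannelAttention(nn.Module):
    """Gates channels with exp(-(s - mu)^2 / (2 sigma^2)), where s is a
    per-channel statistic (here: the mean over tokens); mu and sigma are
    learnable. All of these choices are assumptions for illustration."""

    def __init__(self, dim: int):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(dim))
        self.log_sigma = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        s = x.mean(dim=1, keepdim=True)          # per-channel statistic
        sigma = self.log_sigma.exp()
        w = torch.exp(-((s - self.mu) ** 2) / (2 * sigma ** 2 + 1e-6))
        return x * w                             # channel-wise gating


class EncoderMLP(nn.Module):
    """Standard transformer MLP with the Gaussian attention fused in."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(dim, hidden), nn.Linear(hidden, dim)
        self.act = nn.GELU()
        self.gauss = GaussianChannelAttention(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc2(self.act(self.fc1(x)))
        return x + self.gauss(h)                 # residual fusion


# Usage: ViT-style tokens, batch 2, 197 tokens, embedding dim 384.
out = EncoderMLP(dim=384, hidden=1536)(torch.randn(2, 197, 384))
print(out.shape)  # torch.Size([2, 197, 384])
```

Second, one plausible reading of "mixing contextual features into dynamic convolutions": a standard dynamic convolution (a mixture over a small bank of candidate kernels) whose mixture weights are driven by a context vector pooled from the other domain, so the kernel applied to source features is conditioned on target context.

```python
# Hypothetical sketch (not the authors' code): a dynamic convolution whose
# kernel mixture is conditioned on a cross-domain context vector.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextualDynamicConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, num_kernels: int = 4):
        super().__init__()
        self.out_ch, self.k = out_ch, k
        # Bank of candidate kernels; a per-sample kernel is mixed from these.
        self.kernels = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        # Router maps a context vector to mixture weights over the bank.
        self.router = nn.Linear(in_ch, num_kernels)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) source features; context: (B, C) pooled from the
        # target domain, so the convolution "sees" cross-domain context.
        B, C, H, W = x.shape
        attn = self.router(context).softmax(dim=-1)              # (B, K)
        w = torch.einsum('bk,koihw->boihw', attn, self.kernels)  # per-sample kernels
        # Grouped-conv trick: run B per-sample convolutions in one call.
        y = F.conv2d(x.reshape(1, B * C, H, W),
                     w.reshape(B * self.out_ch, C, self.k, self.k),
                     padding=self.k // 2, groups=B)
        return y.reshape(B, self.out_ch, H, W)


# Usage: a source feature map plus a pooled target-domain context vector.
conv = ContextualDynamicConv(in_ch=64, out_ch=64)
src = torch.randn(2, 64, 32, 32)
ctx = torch.randn(2, 64, 32, 32).mean(dim=(2, 3))  # pooled target features
print(conv(src, ctx).shape)  # torch.Size([2, 64, 32, 32])
```

The grouped-convolution reshape is a common way to batch per-sample kernels through a single F.conv2d call; it keeps the dynamic convolution cheap, which is consistent with the abstract's efficiency claim, though the actual TransConv design may differ.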
