Abstract

Great success has been achieved in unsupervised domain adaptation, which learns to generalize from a labeled source domain to an unlabeled target domain. However, most existing techniques can only handle the closed-set scenario, in which the source and target domains share the same category label set. In this work, we propose a two-stage method for the more challenging task of open set domain adaptation, where the target domain contains categories unseen in the source domain. The first stage formulates the alignment of the two domains as a semi-supervised clustering problem and initially associates each target-domain sample x^t ∈ X^t with a source-domain category label ℓ^s ∈ L^s. To this end, we use a self-training strategy to learn a teacher network and a student network, both of which adopt the self-attention mechanism. The second stage refines the resulting clusters by identifying negative associations (x^t, ℓ^s) and labeling the involved x^t as unknown. For this purpose, we assess the compatibility of each association by replacing the self-attention maps in the last convolutional layers with the newly proposed category attention maps (CAMs), which locate the informative feature pixels for a given category.
Experimental results on three public datasets show the effectiveness and robustness of our method in adaptation across various domain pairs.

Highlights

  • At present, most successful deep learning techniques [1], [2] rely on the availability of a large set of labeled training samples, which follow the same distribution as the testing samples

  • The second stage refines the associations by accepting the positive ones and rejecting the negative ones. This stage introduces the new concept of a category attention map (CAM), based on the observation that the locations of the informative feature pixels are similar for category-sharing samples in the highest layers of a network

  • We introduce the new concept of a category attention map (CAM), and use it to identify the mis-clustered target samples X^t_kn inside the k-th cluster
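The CAM-based refinement in the highlights above can be sketched in code. This is a minimal illustration, not the paper's exact formulation: here a sample's self-attention map is taken as the channel-wise sum of squared activations, the CAM of a category is the average attention map of its samples, and compatibility is scored by the overlap of the two normalized maps. The functions, the overlap score, and the threshold `thresh` are all assumptions for illustration.

```python
import numpy as np

def spatial_attention(feat):
    # feat: (C, H, W) conv features for one sample.
    # A simple self-attention map: channel-wise sum of squared
    # activations, normalized to sum to 1 over spatial positions.
    a = (feat ** 2).sum(axis=0)
    return a / a.sum()

def category_attention_map(feats):
    # feats: (C, H, W) features of samples sharing one category;
    # the CAM is their averaged (then re-normalized) attention map.
    cam = np.mean([spatial_attention(f) for f in feats], axis=0)
    return cam / cam.sum()

def compatible(feat, cam, thresh=0.5):
    # Compatibility of a sample with a category's CAM, scored by
    # the pointwise-minimum overlap of two normalized maps: high
    # when the informative pixels occupy similar locations.
    overlap = np.minimum(spatial_attention(feat), cam).sum()
    return overlap >= thresh
```

A target sample whose attention disagrees with the CAMs of its assigned category would fail this check and be relabeled as unknown.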


Summary

INTRODUCTION

Most successful deep learning techniques [1], [2] rely on the availability of a large set of labeled training samples, which follow the same distribution as the testing samples. Compared with previous methods that consider only a global alignment [17], [18], our method aligns the representation of each target sample with one specific known source-domain category. This is achieved by our proposed training procedure, which iteratively updates the pseudo labels of the target samples. The second stage refines the associations by accepting the positive ones and rejecting the negative ones. For this purpose, it introduces the new concept of a category attention map (CAM), based on the observation that the locations of the informative feature pixels are similar for category-sharing samples in the highest layers of a network. We reject an association (x^t, ℓ^s) and label the involved target sample x^t as unknown if it is not compatible with the CAMs of category ℓ^s, which replace the self-attention maps in the learning procedure
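The stage-1 self-training loop described above can be sketched as follows. This is a schematic, not the paper's implementation: pseudo labels are re-assigned each round from the teacher's confident predictions, and the teacher is assumed here to track the student by an exponential moving average (a mean-teacher-style update; the confidence threshold and the EMA momentum are both illustrative choices).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def select_pseudo_labels(teacher_logits, thresh=0.9):
    # The teacher's prediction on each target sample becomes its
    # pseudo source-category label, kept only when the teacher is
    # confident enough; the rest stay unassigned (-1) this round.
    probs = softmax(teacher_logits)
    labels = probs.argmax(axis=1)
    labels[probs.max(axis=1) < thresh] = -1
    return labels

def ema_update(teacher_w, student_w, momentum=0.99):
    # After the student trains on source labels plus the selected
    # pseudo labels, the teacher's weights track the student's.
    return momentum * teacher_w + (1 - momentum) * student_w
```

Iterating these two steps gradually associates more target samples with known source categories, which is the clustering that stage 2 then refines with CAMs.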

RELATED WORK
ALIGNMENT OF REPRESENTATIONS ACROSS
CAM FOR NEGATIVE ASSOCIATION REJECTION
Method
RESULTS
ABLATION STUDY
Findings
CONCLUSION