Abstract

Conventional unsupervised domain adaptation (UDA) methods presuppose access to labeled source-domain samples while adapting the source model to the target domain. However, this premise does not hold in source-free UDA (SFUDA), where the source data are inaccessible due to data privacy considerations. Some existing methods address this challenging SFUDA problem through self-supervised learning, but the inaccurate pseudo-labels these methods inevitably produce degrade the performance of the target model. We therefore propose a promising SFUDA method, namely Generation, Division and Training (GDT), which aims to improve the reliability of pseudo-labels for self-supervised learning and, via contrastive learning, to encourage similar features to receive closer predictions than dissimilar ones. Specifically, GDT first refines pseudo-labels for target samples with deep clustering and then divides the samples into reliable and unreliable sets. The reliable samples are trained with self-supervised learning and information maximization. For the unreliable samples, we conduct contrastive learning from the perspective of similarity and disparity, attracting similar samples and repulsing dissimilar ones; this pulls similar features closer and pushes dissimilar features apart, leading to efficient feature clustering. Thorough experiments on three benchmark datasets demonstrate the effectiveness of the proposed approach.
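To make the pipeline concrete, the following is a minimal PyTorch sketch of the three stages as the abstract describes them: centroid-based pseudo-label refinement, a confidence-based division into reliable and unreliable samples, and a contrastive loss on the unreliable set. The function names, the threshold `tau`, and the nearest-neighbour choice of positives are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def refine_pseudo_labels(features, logits):
    """Refine pseudo-labels by deep clustering in feature space:
    build class centroids weighted by softmax probabilities, then
    re-assign each sample to its nearest centroid (a common scheme;
    the paper's exact clustering rule may differ)."""
    probs = F.softmax(logits, dim=1)            # (N, C) soft predictions
    feats = F.normalize(features, dim=1)        # (N, D) unit-norm features
    centroids = F.normalize(probs.t() @ feats, dim=1)  # (C, D) class centroids
    sim = feats @ centroids.t()                 # cosine similarity to centroids
    return sim.argmax(dim=1), sim.max(dim=1).values

def divide_samples(confidence, tau=0.5):
    """Split target samples into reliable / unreliable sets by their
    similarity to the assigned centroid (tau is a hypothetical threshold)."""
    reliable = confidence >= tau
    return reliable, ~reliable

def contrastive_loss(features, temperature=0.07):
    """InfoNCE-style loss on unreliable samples: treat each sample's
    nearest neighbour as its positive and all other samples as negatives,
    pulling similar features closer and pushing dissimilar ones apart."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t() / temperature       # (M, M) pairwise similarities
    sim.fill_diagonal_(float('-inf'))           # exclude self-pairs
    positives = sim.argmax(dim=1)               # index of nearest neighbour
    return F.cross_entropy(sim, positives)

# Toy usage with random tensors standing in for a target mini-batch.
N, D, C = 32, 64, 10
features, logits = torch.randn(N, D), torch.randn(N, C)
labels, conf = refine_pseudo_labels(features, logits)
reliable, unreliable = divide_samples(conf, tau=0.2)
loss = contrastive_loss(features[unreliable])
print(labels[:5], reliable.sum().item(), loss.item())
```

In a full training loop, the reliable subset would additionally be trained with the pseudo-label cross-entropy and information-maximization objectives mentioned in the abstract; the sketch above only illustrates the division and the contrastive branch.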
