Abstract

Unsupervised domain adaptation (UDA) aims to exploit knowledge from a related but differently distributed source domain to train a model for a target domain. Large domain divergence, known as domain drift, degrades model performance in the target domain, but effectively exploiting the relational information among unlabeled data can alleviate this issue. In this article, we propose a new UDA method, confidence-diffusion instance contrastive learning (CDICL), which mines deep information in the target data by analyzing the pseudo-labels of target sample pairs. CDICL constructs an information perception space via instance contrastive learning that incorporates samples from both the source and target domains. Each target sample is assigned a pseudo-label and a corresponding confidence score based on the category prior of the source samples, and the confidence associated with each pseudo-label is continuously updated throughout training. Classification is then performed based on the relationship of each target sample to the source-domain samples. This method draws sample pairs of the same class closer, regardless of whether they originate from the source or the target domain, while simultaneously pushing apart pairs from different classes, offering a fresh perspective on unsupervised domain adaptation. The effectiveness of CDICL on UDA tasks is demonstrated through experiments on three datasets: Office-31, Office-Home, and VisDA-2017.
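The abstract does not give the exact objective, but the mechanism it describes, pulling same-class source/target pairs together and pushing different-class pairs apart, weighted by pseudo-label confidence, can be illustrated with a confidence-weighted supervised contrastive loss. The sketch below is a rough assumption of how such a term might look (the function name, pairwise confidence weighting, and temperature `tau` are illustrative choices, not the paper's formulation):

```python
import numpy as np

def confidence_contrastive_loss(feats, labels, conf, tau=0.1):
    """Confidence-weighted instance contrastive loss (illustrative sketch).

    feats : (n, d) array of L2-normalized source + target embeddings
    labels: (n,) true labels (source) or pseudo-labels (target)
    conf  : (n,) confidence in each label (1.0 for source samples)
    """
    sim = feats @ feats.T / tau                      # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)                   # exclude self-comparisons
    # log-softmax over each anchor's similarities to all other samples
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]         # same-(pseudo-)label pairs
    np.fill_diagonal(pos, False)
    w = conf[:, None] * conf[None, :]                # down-weight uncertain pairs
    num = -np.where(pos, w * logp, 0.0).sum(axis=1)  # weighted positive log-probs
    den = (w * pos).sum(axis=1)
    valid = den > 0                                  # anchors with >=1 positive
    return (num[valid] / den[valid]).mean()
```

Minimizing such a loss increases the softmax probability of same-class pairs relative to different-class pairs, with low-confidence pseudo-labeled pairs contributing less to the gradient.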
