Abstract

Unsupervised domain adaptation aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain, thereby improving classification performance on the target domain. Recent methods optimize this task with contrastive learning; however, they apply contrastive learning only to domain alignment, which does not actively serve the classification task and therefore yields suboptimal solutions. To this end, we propose Task-Oriented Contrastive Learning for Unsupervised Domain Adaptation (TOCL), which resolves the inconsistency between optimizing the contrastive-learning objective and the classification task. We apply feature weighting to source-domain images and, after data augmentation, extract task-related features containing semantic information to serve as the source-domain positive and negative sample pairs for contrastive learning. In addition, we introduce a delimitation discriminator that maximizes the output difference between two randomly data-augmented views of a target-domain sample, in order to detect augmented samples that hinder classification; the feature generator then minimizes this difference so as to generate augmented features that are effective for classification. Extensive experiments on several public datasets show that TOCL is adaptable and effective, improving classification accuracy.
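The abstract does not spell out the exact form of the contrastive objective. As a hedged illustration only, a common choice for contrasting an anchor feature against its augmented positive and a set of negatives is an InfoNCE-style loss; the sketch below (function name, temperature value, and the NumPy formulation are assumptions, not the paper's implementation) shows how such a loss could be computed for a single anchor:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor (illustrative sketch).

    anchor, positive: shape (d,) feature vectors.
    negatives: shape (n, d) matrix of negative feature vectors.
    All vectors are L2-normalized, so dot products are cosine similarities.
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    a, p, n = l2norm(anchor), l2norm(positive), l2norm(negatives)
    pos_sim = np.dot(a, p) / temperature      # similarity to the positive view
    neg_sims = n @ a / temperature            # similarities to the negatives
    logits = np.concatenate([[pos_sim], neg_sims])
    # Cross-entropy with the positive treated as the correct "class":
    # -log( exp(pos) / sum(exp(all)) )
    return -pos_sim + np.log(np.sum(np.exp(logits)))
```

Minimizing this loss pulls the anchor toward its positive view and pushes it away from the negatives, which is the general mechanism the source-domain contrastive step relies on; TOCL's contribution is choosing task-related, semantically weighted features as those anchor/positive/negative inputs rather than raw augmented images.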
