Abstract

Unsupervised domain adaptation aims to align the distributions of data in the source and target domains and to assign labels to data in the target domain. In this paper, we propose a new method named Unsupervised Domain Adaptation based on Pseudo-Label Confidence (UDA-PLC). Concretely, UDA-PLC first learns a new feature representation by projecting data of the source and target domains into a latent subspace. In this subspace, the distributions of data in the two domains are aligned and the discriminability of features in both domains is improved. Then, UDA-PLC applies Structured Prediction (SP) and Nearest Class Prototype (NCP) to predict pseudo-labels of data in the target domain, and it carries only a fraction of samples with high confidence, rather than all the pseudo-labeled target samples, into the next iteration of learning. Finally, experimental results validate that the proposed method outperforms several state-of-the-art methods on three benchmark data sets.
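The NCP and confidence-selection steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the source and target features have already been projected into the shared latent subspace, it omits the SP component, and the margin-based confidence score and `keep_ratio` parameter are illustrative assumptions rather than the paper's exact selection criterion.

```python
import numpy as np

def ncp_pseudo_labels(Xs, ys, Xt, keep_ratio=0.5):
    """Nearest-Class-Prototype pseudo-labeling with confidence selection.

    Xs, ys : source features (already in the latent subspace) and labels.
    Xt     : target features in the same subspace.
    Returns pseudo-labels for all target samples and the indices of the
    high-confidence fraction kept for the next learning iteration.
    """
    classes = np.unique(ys)
    # Class prototypes: per-class mean of the source features.
    prototypes = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
    # Distance from every target sample to every prototype.
    dists = np.linalg.norm(Xt[:, None, :] - prototypes[None, :, :], axis=2)
    pseudo = classes[dists.argmin(axis=1)]
    # Illustrative confidence: margin between the two nearest prototypes;
    # a large margin means the assignment is unambiguous.
    sorted_d = np.sort(dists, axis=1)
    margin = sorted_d[:, 1] - sorted_d[:, 0]
    # Keep only the most confident fraction, per the UDA-PLC idea of not
    # feeding every pseudo-labeled target sample back into training.
    n_keep = max(1, int(keep_ratio * len(Xt)))
    keep = np.argsort(-margin)[:n_keep]
    return pseudo, keep
```

A target sample lying halfway between two prototypes gets a near-zero margin and is excluded from the retained set, which is the behavior motivating confidence-based selection.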

Highlights

  • In recent years, deep learning architectures have offered a powerful tool to address AI-based tasks in different fields

  • We focus on unsupervised domain adaptation (UDA), which is considered the most challenging task

Summary

INTRODUCTION

The same year, Li et al. [21] proposed Heterogeneous Domain Adaptation through Progressive Alignment, which projects the source and target domains into a common subspace to reduce the feature discrepancy and distribution divergence by progressive alignment. The performance of such classifiers can be extremely poor: only a small fraction of the target samples are correctly classified, while the rest are misclassified [26], [29], [30]. This is because all these methods assign labels to every sample of the target domain and integrate all the target data with the source data into iterative learning, without considering the confidence of these pseudo-labels [4], [19]. The main contributions of this paper can be summarized as follows: (1) The structural information of the data is preserved to predict the pseudo-labels of the target domain.

MODELS AND ALGORITHMS
DIMENSIONALITY REDUCTION
DOMAIN DISCRIMINATIVE INFORMATION PRESERVATION
ITERATIVE LEARNING BY SELECTING PSEUDO-LABELING OF TARGET DOMAIN
Findings
CONCLUSION