Abstract

Conventional machine learning rests on two assumptions: (1) the training and test data are independently drawn from the same distribution, and (2) a sufficient number of samples is available for training. Both assumptions can be difficult to satisfy in real-world scenarios. Domain adaptation (DA) is a subfield of transfer learning that reduces the distribution discrepancy between a source domain (Ds) and a target domain (Dt) so that knowledge learned on the Ds task can be applied to the Dt task. Most existing DA methods pursue domain invariance by aligning the marginal probability distributions of Ds and Dt. Recent studies have shown that aligning marginal distributions alone is insufficient and that aligning conditional probability distributions is equally important for knowledge transfer. In unsupervised DA, however, aligning conditional distributions is particularly difficult because labels for Dt are unavailable. To address this issue, researchers have proposed several methods, including pseudo-labeling, that offer novel solutions. In this paper, we systematically analyze pseudo-labeling algorithms and their applications in unsupervised DA. First, we summarize pseudo-label generation methods based on single and multiple classifiers, as well as measures for handling imbalanced samples. Second, we investigate the use of pseudo-labels for category-level feature alignment and for improving feature discriminability. Finally, we discuss the challenges and trends of pseudo-labeling algorithms. To the best of our knowledge, this article is the first review of pseudo-labeling techniques for unsupervised DA.
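The pseudo-labeling loop described above can be sketched in a few lines. The sketch below is purely illustrative, not a method from the surveyed literature: it uses a toy nearest-centroid classifier and a margin-based confidence threshold (both assumptions chosen for brevity) to show the generic pattern of "predict on the unlabeled target domain, keep confident predictions as pseudo-labels, retrain on source plus pseudo-labeled target data".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source domain: two Gaussian classes with known labels.
Xs = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
ys = np.array([0] * 50 + [1] * 50)

# Toy target domain: same classes, shifted by +1 (covariate shift); labels unknown.
Xt = np.vstack([rng.normal(1, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

def centroids(X, y):
    """Per-class mean vectors for a nearest-centroid classifier."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# 1. Train a source-only classifier (here: class centroids on Ds).
mu = centroids(Xs, ys)

# 2. Predict target labels and a confidence score
#    (margin between distances to the two centroids).
dist = np.linalg.norm(Xt[:, None, :] - mu[None, :, :], axis=2)
pred = dist.argmin(axis=1)
margin = np.abs(dist[:, 0] - dist[:, 1])

# 3. Keep only confident target predictions as pseudo-labels
#    (median margin used as an arbitrary illustrative threshold).
keep = margin > np.median(margin)
Xp, yp = Xt[keep], pred[keep]

# 4. Re-estimate the classifier on source + pseudo-labeled target data,
#    pulling the decision rule toward the target distribution.
mu_adapted = centroids(np.vstack([Xs, Xp]), np.concatenate([ys, yp]))
dist_adapted = np.linalg.norm(Xt[:, None, :] - mu_adapted[None, :, :], axis=2)
pred_adapted = dist_adapted.argmin(axis=1)
```

Real pseudo-labeling methods replace the centroid classifier with a deep network, iterate steps 2-4, and use more careful selection strategies (e.g., per-class thresholds) to avoid the imbalanced-sample problem the survey discusses.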
