Abstract
Self-iterative strategies consist of labeling (unlabeled) target data using a hypothesis built from the available source examples in a step-by-step fashion, such that at each step the classifier is learned on a modified sample that includes the pseudo-labeled (or semi-labeled) examples from the previous step. In this scenario, a new classifier is trained in a way that takes into account the information provided by both domains, with the process stopping once all target instances are labeled. This iterative process, which has its origins in self-training and co-training strategies, has led to several strategies for unsupervised domain adaptation, the most notable examples being the DASVM algorithm and the approach of Perez and Sánchez-Montañés based on the expectation–maximization algorithm. More recently, several approaches have extended these ideas based on different algorithmic principles, with regularization information coming from pseudo-labeled data.
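To make the loop concrete, the following is a minimal sketch of such a self-iterative labeling procedure, assuming a generic confidence-based selection rule; it is an illustration of the general scheme, not the DASVM or EM-based procedures themselves, and the classifier, batch size, and confidence criterion are all assumptions:

```python
# Hypothetical sketch of a self-iterative pseudo-labeling loop:
# at each step the classifier is retrained on the source sample augmented
# with the most confidently pseudo-labeled target examples, stopping once
# every target instance has been labeled.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_iterative_labeling(X_src, y_src, X_tgt, batch_size=10):
    X_train, y_train = X_src.copy(), y_src.copy()
    unlabeled = np.arange(len(X_tgt))   # indices of still-unlabeled target points
    clf = LogisticRegression(max_iter=1000)

    while len(unlabeled) > 0:           # stop when all target instances are labeled
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(X_tgt[unlabeled])
        conf = proba.max(axis=1)        # confidence of each prediction
        # select the most confidently predicted target examples this step
        picked = unlabeled[np.argsort(-conf)[:batch_size]]
        pseudo = clf.predict(X_tgt[picked])
        # augment the training sample with the new pseudo-labeled examples
        X_train = np.vstack([X_train, X_tgt[picked]])
        y_train = np.concatenate([y_train, pseudo])
        unlabeled = np.setdiff1d(unlabeled, picked)

    return clf
```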