Abstract
Unsupervised domain adaptation (UDA) is a widely used machine learning approach that transfers knowledge from a source domain with abundant labeled data to a target domain with scarce or no labeled data. In recent years, self-training, derived from semi-supervised learning, has provided a powerful tool for UDA by training on unlabeled target-domain data with pseudo-labels. Under domain shift, however, the pseudo-labels generated by standard self-training become unreliable because of domain-specific class-distribution shift, and the resulting label noise caps the achievable accuracy. To address these problems, this paper proposes a domain adaptation method based on regularized joint self-training (RJS). RJS consists of two key steps. First, pseudo-label generation and optimization are trained jointly with the source domain so that pseudo-labels generalize automatically across domains. Second, a regularization term is introduced as a confidence rule on noisy pseudo-labels during initial model training, which implicitly prevents the model from memorizing incorrect labels. To verify the effectiveness of the proposed method, experiments were conducted on the VisDA-2017, Office-31, Office-Home, and DomainNet datasets, where accuracies of 87.1%, 89.3%, 76.9%, and 57.3% were achieved, respectively. In addition, the average classification accuracy of RJS on VisDA-2017 improved by about 12.6% over FixMatch, a standard self-training method. The experimental results demonstrate that RJS achieves state-of-the-art performance on multiple standard UDA benchmarks and confirm its effectiveness in the presence of domain-specific class-distribution shifts.
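The two ingredients the abstract describes, self-training on confident pseudo-labels and a regularizer that discourages memorizing wrong ones, can be illustrated with a minimal NumPy sketch. This is not the RJS implementation: the confidence threshold follows the FixMatch-style baseline mentioned above, and label smoothing stands in for the paper's regularization as one common confidence rule; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pseudo_labels(target_logits, threshold=0.95):
    """Standard self-training step: keep only predictions whose top
    probability exceeds a confidence threshold (FixMatch-style)."""
    probs = softmax(target_logits)
    labels = probs.argmax(axis=-1)
    mask = probs.max(axis=-1) >= threshold  # True where the pseudo-label is trusted
    return labels, mask

def smoothed_ce(logits, labels, eps=0.1):
    """Cross-entropy with label smoothing: a simple regularizer (stand-in
    for RJS's confidence rule) that keeps the target distribution away
    from a hard one-hot, so the model cannot fully memorize a possibly
    wrong pseudo-label."""
    n, k = logits.shape
    probs = softmax(logits)
    smooth = np.full((n, k), eps / (k - 1))          # small mass on wrong classes
    smooth[np.arange(n), labels] = 1.0 - eps          # most mass on the pseudo-label
    return -(smooth * np.log(probs + 1e-12)).sum(axis=-1).mean()
```

Usage: generate pseudo-labels on target logits, then train only on the confident subset with the regularized loss.

```python
logits = np.array([[4.0, 0.0, 0.0],   # confident prediction -> kept
                   [0.5, 0.4, 0.3]])  # uncertain prediction -> discarded
labels, mask = pseudo_labels(logits, threshold=0.9)
loss = smoothed_ce(logits[mask], labels[mask])
```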