Abstract

Unsupervised domain adaptation (UDA) attempts to learn domain-invariant representations and has achieved significant progress; among UDA approaches, self-training-based methods have shown particularly strong performance. However, due to the domain gap, pseudo-labels selected via high confidence scores or uncertainty estimates inevitably contain noise, leading to inaccurate predictions. To address this issue, we propose a novel risk-consistent training method. Specifically, both a clean and a noisy classifier are introduced to estimate the noise transition matrix. The clean classifier assigns pseudo-labels to target data in each iteration; the noisy classifier is then trained on the noisy target samples, and its optimal parameters are obtained through a closed-form solution. Heuristically, we also pre-train a domain predictor to select target-like source examples for estimating the noise transition matrix. In addition, we design an uncertainty-guided regularization that generates soft pseudo-labels and avoids overconfident predictions. Extensive experiments demonstrate the effectiveness of our method, which achieves state-of-the-art performance. Code is available at https://github.com/feifei-cv/RCE.
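The following is a minimal sketch, not the authors' implementation, of two ideas the abstract names: a risk-consistent (transition-corrected) loss that maps clean class posteriors through an estimated noise transition matrix T before comparing them with noisy pseudo-labels, and uncertainty-guided soft pseudo-labels. The entropy-based uncertainty weighting, the symmetric-noise T used for the demo, and all tensor names are illustrative assumptions.

```python
# Sketch of transition-corrected loss and uncertainty-guided soft labels.
# Assumptions: T[i, j] = P(noisy = j | clean = i); uncertainty is measured
# by normalized predictive entropy (a simple stand-in, not the paper's exact
# regularizer).
import torch
import torch.nn.functional as F


def transition_corrected_loss(clean_logits, pseudo_labels, T):
    """Risk-consistent loss: push the clean posterior through T so the
    model is supervised in the *noisy* label space of the pseudo-labels."""
    clean_probs = F.softmax(clean_logits, dim=1)   # (B, C) clean posterior
    noisy_probs = clean_probs @ T                  # (B, C) implied noisy posterior
    return F.nll_loss(torch.log(noisy_probs + 1e-8), pseudo_labels)


def uncertainty_soft_labels(logits, num_classes):
    """Blend hard pseudo-labels with the uniform distribution in proportion
    to normalized entropy, so uncertain samples get softer targets."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    w = (entropy / torch.log(torch.tensor(float(num_classes)))).unsqueeze(1)
    hard = F.one_hot(probs.argmax(dim=1), num_classes).float()
    uniform = torch.full_like(hard, 1.0 / num_classes)
    return (1.0 - w) * hard + w * uniform          # rows sum to 1


if __name__ == "__main__":
    B, C = 4, 3
    logits = torch.randn(B, C)
    # Demo transition matrix: symmetric label noise with 90% kept correct.
    T = torch.full((C, C), 0.1 / (C - 1))
    T.fill_diagonal_(0.9)
    pseudo = logits.argmax(dim=1)                  # hard pseudo-labels
    print(transition_corrected_loss(logits, pseudo, T).item())
    print(uncertainty_soft_labels(logits, C))
```

In this sketch, confident predictions (low entropy) keep near-one-hot targets, while uncertain ones are pulled toward uniform, which is one simple way to discourage the overconfident predictions the abstract warns about.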
