Abstract

Multi-source Unsupervised Domain Adaptation (MUDA) aims to transfer knowledge obtained from multiple labeled source domains to an unlabeled target domain. In this paper, we propose a novel self-training method for MUDA that combines pseudo label-oriented co-teaching with pseudo label decoupling for pseudo label rectification in semantic segmentation. Existing ensemble-based self-training methods, which are well-known approaches for MUDA, transfer knowledge from the source domains to the target domain using pseudo labels formed by ensembling the predictions of multiple models. In these methods, information from the multiple models can be contaminated, and errors from incorrect pseudo labels can be propagated. In contrast, the proposed pseudo label-oriented co-teaching trains each model with pseudo labels produced by its peer model, without any integration of pseudo labels. At the same time, the proposed pseudo label decoupling rectifies the pseudo labels by updating the models with the two pseudo labels only at positions where they disagree. This also alleviates the class imbalance problem in semantic segmentation, in which dominant classes would otherwise drive the updates during training. The effects of the proposed pseudo label-oriented co-teaching and pseudo label decoupling on segmentation performance were verified through extensive experiments. The proposed method achieved the best semantic segmentation accuracy among the benchmark methods, and the prediction accuracy for small objects was greatly improved by the proposed pseudo label rectification.
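The co-teaching and decoupling steps described above can be sketched in a minimal, framework-free form. This is a hypothetical illustration, not the paper's implementation: the function names (`disagreement_mask`, `peer_labels`), the flattened per-pixel label lists, and the `-1` ignore index are all assumptions made for clarity.

```python
def disagreement_mask(pred_a, pred_b):
    """Pseudo label decoupling: keep only positions where the two
    peer models' pseudo labels disagree (hypothetical sketch)."""
    return [a != b for a, b in zip(pred_a, pred_b)]

def peer_labels(pred_peer, mask, ignore_index=-1):
    """Co-teaching-style targets for one model: pseudo labels taken
    from its peer, with agreeing positions masked out so only
    disagreements contribute to the update."""
    return [p if m else ignore_index for p, m in zip(pred_peer, mask)]

# Toy flattened per-pixel class predictions from two peer models.
pred_a = [0, 1, 2, 2, 2, 1]
pred_b = [0, 2, 2, 1, 2, 1]

mask = disagreement_mask(pred_a, pred_b)
# Each model is updated with its peer's pseudo labels only where
# the two predictions disagree; agreed (often dominant-class)
# pixels are ignored, which curbs error propagation and imbalance.
labels_for_a = peer_labels(pred_b, mask)
labels_for_b = peer_labels(pred_a, mask)
```

In a real segmentation pipeline, the masked targets would feed a cross-entropy loss with the ignore index, so the gradient flows only through the disagreeing pixels.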
