Abstract

Constrained by the cost of producing training labels, semi-supervised methods have been applied to semantic segmentation and have achieved varying degrees of success. Recent semi-supervised learning methods take pseudo supervision as their core idea, especially self-training methods that generate pseudo labels. However, pseudo labels are noisy: as training progresses, the model must attend to more semantic classes and tends to bias towards the newly learned ones. Moreover, because labeled data are limited, it is difficult for the model to "stabilize" what it has learned, raising the issue of the model forgetting previously acquired knowledge. From this perspective, we point out that alleviating the model's "catastrophic forgetting" improves the quality of pseudo labels, and we propose a pseudo label enhancement strategy. In this strategy, pseudo labels generated by the previous model are used to rehearse previously learned knowledge, and a conflict reduction step resolves conflicts between pseudo labels from the previous and current models. We evaluate our scheme on two standard semi-supervised semantic segmentation benchmarks and achieve state-of-the-art performance on both. Our code is released at https://github.com/wing212/DMT-PLE.
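The abstract does not spell out the conflict reduction rule. As a rough illustration only, the following PyTorch sketch shows one plausible confidence-based way to merge pseudo labels from the previous and current models; the function name `merge_pseudo_labels`, the threshold `conf_thresh`, and the tie-breaking logic are illustrative assumptions, not the paper's actual method.

```python
import torch

def merge_pseudo_labels(prev_logits, curr_logits, conf_thresh=0.9):
    """Hypothetical sketch: combine pseudo labels from the previous
    and current models. Where the two models disagree, keep the
    prediction with the higher softmax confidence; pixels below the
    confidence threshold are set to the ignore index (255) so they
    do not contribute to the loss.

    prev_logits, curr_logits: tensors of shape (N, C, H, W).
    Returns pseudo labels of shape (N, H, W).
    """
    prev_prob, prev_label = torch.softmax(prev_logits, dim=1).max(dim=1)
    curr_prob, curr_label = torch.softmax(curr_logits, dim=1).max(dim=1)

    # Start from the current model's labels (newly learned classes).
    pseudo = curr_label.clone()

    # Conflict: the two models disagree on a pixel. Trust whichever
    # model is more confident; high-confidence labels from the previous
    # model thereby "rehearse" earlier knowledge.
    conflict = prev_label != curr_label
    prev_wins = conflict & (prev_prob > curr_prob)
    pseudo[prev_wins] = prev_label[prev_wins]

    # Mask out pixels where neither model is sufficiently confident.
    best_prob = torch.maximum(prev_prob, curr_prob)
    pseudo[best_prob < conf_thresh] = 255  # ignore index

    return pseudo
```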
