Abstract
Motivated by the high cost of generating labels for training, semi-supervised methods have been applied to semantic segmentation and have achieved varying degrees of success. Recently, semi-supervised learning methods have taken pseudo supervision as their core idea, especially self-training methods that generate pseudo labels; however, pseudo labels are noisy. As training progresses, the model must attend to more semantic classes and tends to be biased toward the newly learned ones. Moreover, because labeled data are limited, it is difficult for the model to "stabilize" what it has learned, which raises the issue of forgetting previously learned knowledge. Based on this view, we argue that alleviating "catastrophic forgetting" improves the quality of pseudo labels, and we propose a pseudo-label enhancement strategy. In this strategy, the pseudo labels generated by the previous model are used to rehearse previously learned knowledge, and a conflict-reduction step resolves disagreements between pseudo labels generated by the previous and current models. We evaluate our scheme on two standard semi-supervised semantic segmentation benchmarks and achieve state-of-the-art performance on both. Our code is released at https://github.com/wing212/DMT-PLE.
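To make the core idea concrete, the sketch below illustrates one plausible reading of the strategy: pseudo labels from the previous and current models are merged, and conflicting pixels are resolved by prediction confidence. This is a minimal, hypothetical sketch; the function name, the confidence-based resolution rule, and the threshold are our own assumptions, not the paper's exact conflict-reduction method.

```python
import torch

def enhance_pseudo_labels(prev_logits, curr_logits, conf_thresh=0.9):
    """Hypothetical sketch of pseudo-label enhancement.

    prev_logits, curr_logits: (B, C, H, W) logits from the previous
    and current models. Conflicting pixels are resolved by keeping
    the more confident prediction (an assumed stand-in for the
    paper's conflict reduction).
    """
    prev_prob, prev_label = torch.softmax(prev_logits, dim=1).max(dim=1)
    curr_prob, curr_label = torch.softmax(curr_logits, dim=1).max(dim=1)

    # Start from the current model's pseudo labels.
    pseudo = curr_label.clone()

    # Rehearse previous knowledge: where the two models disagree,
    # fall back to the previous model's label if it is more confident.
    conflict = prev_label != curr_label
    prefer_prev = conflict & (prev_prob > curr_prob)
    pseudo[prefer_prev] = prev_label[prefer_prev]

    # Mask out pixels where neither model is confident enough
    # (255 is a common "ignore" index in segmentation losses).
    ignore = torch.maximum(prev_prob, curr_prob) < conf_thresh
    pseudo[ignore] = 255
    return pseudo
```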