Abstract
Recent advances in semi-supervised semantic segmentation have demonstrated the effectiveness of pseudo-label supervision in mitigating the cost of pixel-wise annotation. However, pseudo labels generated by self-training typically contain a significant amount of noise, which can impede the training of the supervised model. In this study, we identify low- and high-level semantic errors as the two key factors that limit the accuracy of pseudo labels. To fully exploit the potential of pseudo labels, we introduce a novel semi-supervised framework named Twin Pseudo-training (TPseudo), which employs a consistency-and-disagreement collaboration strategy. Specifically, we correct pseudo labels with a False-positive Filter (FPF) to reduce high-level semantic noise and refine low-level semantic biases using a Semantic Error Detector (SED). Finally, we design a Self-Adaptive Weight (SAW) loss function based on the disagreement between two predictions to exploit every pixel of the pseudo labels. Experimental results on the standard PASCAL VOC 2012 and Cityscapes benchmarks demonstrate the efficacy of the proposed method.
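The abstract's core idea of a disagreement-based weighting can be sketched as follows. This is a minimal illustrative form, not the paper's actual SAW loss: the function name, the L1 disagreement measure, and the linear weighting are all assumptions, since the abstract does not specify the formulation. Pixels where the two branch predictions agree contribute with full weight, while disagreeing pixels are down-weighted rather than discarded.

```python
import numpy as np

def saw_loss(probs_a, probs_b, pseudo_labels, eps=1e-8):
    """Hypothetical disagreement-weighted cross-entropy sketch.

    probs_a, probs_b: (N, C) softmax outputs of two prediction branches.
    pseudo_labels:    (N,) integer pseudo labels per pixel.
    """
    n = probs_a.shape[0]
    # Per-pixel disagreement: total-variation (0.5 * L1) distance
    # between the two predicted distributions, in [0, 1].
    disagreement = 0.5 * np.abs(probs_a - probs_b).sum(axis=1)
    # Self-adaptive weight: agreeing pixels weigh ~1, conflicting ones less.
    weights = 1.0 - disagreement
    # Cross-entropy of the averaged prediction against the pseudo label.
    mean_probs = 0.5 * (probs_a + probs_b)
    ce = -np.log(mean_probs[np.arange(n), pseudo_labels] + eps)
    return float((weights * ce).mean())
```

Under this sketch, a pixel where both branches confidently agree with the pseudo label yields a small loss, whereas a pixel where the branches conflict is damped by its low weight instead of being masked out entirely.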