Abstract

Labeling pixel-level masks for fine-grained semantic segmentation tasks, e.g., human parsing, remains a challenging task. Ambiguous boundaries between different semantic parts, as well as categories with similar appearances, often confuse annotators, leading to incorrect labels in the ground-truth masks. This label noise inevitably harms the training process and degrades the performance of the learned models. To tackle this issue, we introduce a noise-tolerant method in this work, called Self-Correction for Human Parsing (SCHP), to progressively improve the reliability of both the supervised labels and the learned models. In particular, starting from a model trained with inaccurate annotations as initialization, we design a cyclical learning scheduler to infer more reliable pseudo masks by iteratively aggregating the currently learned model with the former sub-optimal one in an online manner. Besides, the correspondingly corrected labels can in turn further boost the model performance. In this way, the models and the labels reciprocally become more robust and accurate over the self-correction learning cycles. Our SCHP is model-agnostic and can be applied to any human parsing model to further enhance its performance. Extensive experiments on four human parsing models, including Deeplab V3+, CE2P, OCR and CE2P+, demonstrate the effectiveness of the proposed SCHP. We achieve new state-of-the-art results on 6 benchmarks: LIP, Pascal-Person-Part and ATR for single human parsing, CIHP and MHP for multi-person human parsing, and VIP for video human parsing. In addition, benefiting from the superiority of SCHP, we achieved 1st place on all three human parsing tracks in the 3rd Look Into Person Challenge. The code is available at https://github.com/PeikeLi/Self-Correction-Human-Parsing.
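The cyclical self-correction described above can be sketched as a simple alternation between retraining on the current pseudo masks and online-averaging the resulting models and labels. The sketch below is illustrative only: the function names (`train_fn`, `infer_fn`) and the NumPy-array representation of weights and label maps are assumptions, not the paper's implementation.

```python
import numpy as np

def aggregate(avg, new, m):
    """Online running average after m previous cycles:
    avg_{m+1} = (m * avg_m + new) / (m + 1)."""
    return (m * avg + new) / (m + 1)

def self_correction_cycles(init_weights, init_labels, train_fn, infer_fn,
                           num_cycles=3):
    """Alternately refine model weights and pseudo-label masks.

    train_fn(weights, labels) -> weights retrained on the current labels
    infer_fn(weights)         -> pseudo-label masks predicted by the model
    (Both callables are hypothetical stand-ins for the real training and
    inference procedures.)
    """
    w_avg, y_avg = init_weights, init_labels
    for m in range(num_cycles):
        w_new = train_fn(w_avg, y_avg)           # retrain with current labels
        w_avg = aggregate(w_avg, w_new, m + 1)   # aggregate model weights
        y_new = infer_fn(w_avg)                  # infer refined pseudo masks
        y_avg = aggregate(y_avg, y_new, m + 1)   # aggregate label predictions
    return w_avg, y_avg
```

In each cycle, the aggregated model produces the next set of pseudo masks, and the aggregated masks supervise the next round of training, so errors from any single sub-optimal model are smoothed out over cycles.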

