Abstract
Label noise-tolerant deep learning methods have been studied extensively for pattern classification tasks, but far less so for image segmentation. In medical image segmentation in particular, annotations are prone to inter-rater variability and subjectivity, so learning from corrupted labels is an important and active topic. In this paper, we propose a model-driven, self-aware self-training segmentation framework for label noise-tolerant learning. "Model-driven" refers to a model state estimation method that detects the early-learning phase under noisy labels across different segmentation tasks. We further propose a pseudo-label generation method that corrects the training process under noisy annotations. Pseudo-label generation is based on prototypical feature learning and introduces no extra parameters: it exploits the similarity and consistency between pixel-wise features and global class prototypes to remedy incorrect labels, hence "self-aware". Finally, we introduce self-training with the generated pseudo labels to boost robustness against label noise. Extensive experiments on three public datasets demonstrate that our method outperforms popular noise-robust segmentation methods and remains effective and stable under different kinds of label noise. On the PROMISE12 dataset with a 75% noise ratio, our method improves over direct training by 5.80% Dice (DSC).
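To make the prototype-based pseudo-label step concrete, below is a minimal sketch, not the authors' released code. It assumes a PyTorch encoder producing pixel-wise features of shape (B, D, H, W) and noisy integer labels of shape (B, H, W); the function name `generate_pseudo_labels`, the temperature `tau`, and the confidence threshold are illustrative choices, not values from the paper.

```python
# Sketch: relabel pixels by their similarity to global class prototypes.
# Assumptions (not from the paper): cosine similarity, hard argmax pseudo
# labels, and a fixed confidence gate for overriding the given label.
import torch
import torch.nn.functional as F

def generate_pseudo_labels(feats, labels, num_classes, tau=0.1, thresh=0.9):
    """feats: (B, D, H, W) pixel features; labels: (B, H, W) noisy labels."""
    B, D, H, W = feats.shape
    flat = feats.permute(0, 2, 3, 1).reshape(-1, D)   # (N, D), N = B*H*W
    flat = F.normalize(flat, dim=1)                   # unit-norm features
    lab = labels.reshape(-1)                          # (N,)

    # Global prototypes: per-class sum of features; after L2 normalization
    # the direction equals that of the class-mean feature.
    one_hot = F.one_hot(lab, num_classes).float()     # (N, C)
    protos = F.normalize(one_hot.t() @ flat, dim=1)   # (C, D)

    # Cosine similarity of every pixel to every prototype, softened by tau.
    sim = flat @ protos.t() / tau                     # (N, C)
    pseudo = sim.argmax(dim=1).reshape(B, H, W)       # hard pseudo labels
    conf = sim.softmax(dim=1).max(dim=1).values.reshape(B, H, W)

    # Keep the given label where it agrees with the prototype assignment;
    # override it only when the prototype assignment is confident.
    agree = pseudo == labels
    return torch.where(agree | (conf > thresh), pseudo, labels)
```

In a self-training loop of this kind, the corrected labels returned here would replace the noisy annotations as supervision after the estimated early-learning point, so the network stops memorizing corrupted pixels.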