Abstract

Objective. Training neural networks for pixel-wise or voxel-wise image segmentation is a challenging task that requires a considerable number of training samples with highly accurate, densely delineated ground-truth maps. The challenge is especially prominent in the medical imaging domain, where obtaining reliable annotations for training samples is a difficult, time-consuming, and expert-dependent process. Models that perform well with limited annotated training data are therefore desirable.

Approach. In this study, we propose the extremely sparse annotation neural network (ESA-Net), a framework that learns from only the label of the single central slice for 3D volumetric segmentation, exploiting both intra-slice pixel dependencies and inter-slice image correlations with uncertainty estimation. Specifically, ESA-Net consists of four specially designed components: (1) an intra-slice pixel dependency-guided pseudo-label generation module that exploits uncertainty in network predictions while generating pseudo-labels for unlabeled slices with temporal ensembling; (2) an inter-slice image correlation-constrained pseudo-label propagation module that propagates labels from the labeled central slice to unlabeled slices by self-supervised registration with rotation ensembling; (3) a pseudo-label fusion module that fuses the two sets of generated pseudo-labels with voxel-wise uncertainty guidance; and (4) a final segmentation network optimization module that makes final predictions with scoring-based label quantification.

Main results. Extensive validation experiments were performed on two popular yet challenging magnetic resonance image segmentation tasks, comparing ESA-Net against five state-of-the-art methods.

Significance. The results demonstrate that our proposed ESA-Net consistently achieves better segmentation performance even under the extremely sparse annotation setting, highlighting its effectiveness in exploiting information from unlabeled data.
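
The abstract does not spell out how the voxel-wise uncertainty guidance in module (3) combines the two pseudo-label sources. The sketch below is a minimal illustration only, not the authors' method: it assumes each source provides a per-voxel uncertainty map (e.g., predictive entropy) and fuses the two label volumes by keeping, at every voxel, the label from the less uncertain source. The function name fuse_pseudo_labels, the minimum-uncertainty rule, and the max_unc rejection threshold are all assumptions introduced for illustration.

    # Hypothetical voxel-wise uncertainty-guided fusion of two pseudo-label
    # volumes (a sketch, not ESA-Net's actual fusion rule).
    import numpy as np

    def fuse_pseudo_labels(labels_a, unc_a, labels_b, unc_b, max_unc=None):
        """Fuse two pseudo-label volumes by picking the lower-uncertainty source.

        labels_a, labels_b : integer label volumes of identical shape (D, H, W),
                             e.g., from temporal ensembling and from registration
        unc_a, unc_b       : per-voxel uncertainty maps of the same shape;
                             lower values mean higher confidence
        max_unc            : optional threshold; voxels where both sources are
                             too uncertain are marked -1 (to be ignored when
                             training the final segmentation network)
        """
        # At each voxel, keep the label from whichever source is more confident.
        fused = np.where(unc_a <= unc_b, labels_a, labels_b)
        if max_unc is not None:
            # Reject voxels where even the better source exceeds the threshold.
            too_uncertain = np.minimum(unc_a, unc_b) > max_unc
            fused = np.where(too_uncertain, -1, fused)
        return fused

    # Toy usage on a 2x2x2 volume: source A is uniformly more confident,
    # so its labels (all zeros) win at every voxel.
    a = np.zeros((2, 2, 2), dtype=int)
    b = np.ones((2, 2, 2), dtype=int)
    print(fuse_pseudo_labels(a, np.full(a.shape, 0.1), b, np.full(b.shape, 0.4)))

A soft alternative would weight the two label sets by inverse uncertainty instead of the hard per-voxel selection shown here; the hard rule is used purely for clarity.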
