Abstract
Since segmentation labeling is usually time-consuming and annotating medical images requires professional expertise, it is laborious to obtain a large-scale, high-quality annotated segmentation dataset. We propose a novel weakly- and semi-supervised framework named SOUSA (Segmentation Only Uses Sparse Annotations), which learns from a small set of sparsely annotated data and a large amount of unlabeled data. The proposed framework contains a teacher model and a student model. The student model is weakly supervised by scribbles and a geodesic distance map derived from the scribbles. Meanwhile, a large amount of unlabeled data with various perturbations is fed to both the student and teacher models. Consistency between their output predictions is enforced by a Mean Square Error (MSE) loss and a carefully designed Multi-angle Projection Reconstruction (MPR) loss. Extensive experiments demonstrate the robustness and generalization ability of the proposed method. Results show that our method outperforms weakly- and semi-supervised state-of-the-art methods on multiple datasets. Furthermore, when the dataset size is limited, our method achieves performance competitive with some fully supervised methods trained on dense annotations.
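The abstract's core semi-supervised ingredient, an MSE consistency loss between a student and an EMA-updated teacher on perturbed unlabeled data, can be illustrated with a minimal PyTorch-style sketch. All names below (TinySegNet, ema_update, the noise perturbation, the EMA rate) are illustrative assumptions, not the paper's actual code, and the scribble, geodesic-distance, and MPR loss terms are omitted.

```python
# Minimal sketch of teacher-student MSE consistency on unlabeled data,
# as described in the abstract. Hypothetical names and settings throughout.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Placeholder segmentation backbone (stand-in for the paper's network)."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.conv(x)

student = TinySegNet()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated by EMA, not by backprop

def ema_update(teacher, student, alpha=0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1.0 - alpha)

# Unlabeled batch; student and teacher see differently perturbed inputs
# (here simply additive Gaussian noise as a stand-in perturbation).
x = torch.randn(4, 1, 64, 64)
student_prob = F.softmax(student(x + 0.1 * torch.randn_like(x)), dim=1)
with torch.no_grad():
    teacher_prob = F.softmax(teacher(x + 0.1 * torch.randn_like(x)), dim=1)

# MSE consistency between student and teacher predictions.
loss_consistency = F.mse_loss(student_prob, teacher_prob)
loss_consistency.backward()
ema_update(teacher, student)
```

In the full framework this consistency term would be combined with the scribble-based weak supervision and the MPR loss on labeled and unlabeled data, respectively.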