Abstract

3D semi-supervised medical image segmentation is essential in computer-aided diagnosis, as it reduces the time-consuming burden of manual annotation. Current 3D semi-supervised segmentation algorithms face several challenges: limited attention to volume-wise context information, an inability to generate accurate pseudo-labels, and a failure to capture important details during data augmentation. This paper proposes a dual uncertainty-guided mixing consistency network for accurate 3D semi-supervised segmentation that addresses these challenges. The proposed network consists of three components. A Contrastive Training Module improves the quality of augmented images by preserving invariance between the original data and its augmentations. A Dual Uncertainty Strategy computes the uncertainty of two different models and combines them to select more confident regions for subsequent segmentation. A Mixing Volume Consistency Module, guided by the dual uncertainty, enforces consistency between mixing before and after segmentation, enabling the network to fully learn volume-wise context information. Quantitative and qualitative evaluations on brain tumor and left atrial segmentation datasets show that the proposed method outperforms state-of-the-art 3D semi-supervised methods, demonstrating its potential as a medical tool for accurate segmentation. Code is available at: https://github.com/yang6277/DUMC.
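To illustrate the idea behind the Dual Uncertainty Strategy described above, the following is a minimal NumPy sketch (not the authors' implementation): voxel-wise predictive entropy is computed for two models' softmax outputs, the two uncertainty maps are combined, and only voxels below an uncertainty threshold are kept as the "more confident" region. The function names, the entropy-based uncertainty measure, and the max-combination rule are illustrative assumptions.

```python
import numpy as np

def entropy(probs, eps=1e-8):
    # Voxel-wise predictive entropy over the class axis (axis 0).
    # probs: array of shape (num_classes, *spatial_dims), rows sum to 1.
    return -np.sum(probs * np.log(probs + eps), axis=0)

def dual_uncertainty_mask(probs_a, probs_b, threshold):
    # Combine the uncertainty maps of two models (here: element-wise max,
    # i.e. a voxel is trusted only if BOTH models are confident there),
    # then keep voxels whose combined uncertainty falls below the threshold.
    combined = np.maximum(entropy(probs_a), entropy(probs_b))
    return combined < threshold

# Toy usage: two voxels, two classes. The first voxel is confidently
# predicted by both models; the second is maximally uncertain.
probs_a = np.array([[0.99, 0.5],
                    [0.01, 0.5]])
probs_b = np.array([[0.95, 0.6],
                    [0.05, 0.4]])
mask = dual_uncertainty_mask(probs_a, probs_b, threshold=0.3)
# mask → [True, False]: only the confident voxel is selected
```

In a full pipeline, this mask would restrict which pseudo-labeled voxels contribute to the consistency loss, filtering out regions where either model is unsure.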
