Abstract

Accurate tissue segmentation on MRI is important for physicians when diagnosing and treating patients. However, most existing models are designed for a single tissue segmentation task and generalize poorly to other MRI tissue segmentation tasks. In addition, label acquisition is time-consuming and laborious, which remains an open challenge. In this study, we propose universal Fusion-Guided Dual-View Consistency Training (FDCT) for semi-supervised tissue segmentation on MRI. It achieves accurate and robust tissue segmentation across multiple tasks and alleviates the problem of insufficient labeled data. Specifically, to build bidirectional consistency, we feed dual-view images into a single-encoder dual-decoder structure to obtain view-level predictions, and then pass them to a fusion module to generate an image-level pseudo-label. Moreover, to improve boundary segmentation quality, we propose the Soft-label Boundary Optimization Module (SBOM). We conducted extensive experiments on three MRI datasets to evaluate the effectiveness of our method. Experimental results demonstrate that it outperforms state-of-the-art semi-supervised medical image segmentation methods.
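The dual-view consistency idea described above can be illustrated with a minimal sketch. The names below (`DualViewSegNet`, `consistency_loss`), the toy encoder/decoder layers, the perturbation-based views, and the plain averaging used as "fusion" are illustrative assumptions, not the paper's actual fusion module or SBOM.

```python
# Minimal sketch: one shared encoder, two decoders (one per view), and a
# simple averaging "fusion" that produces an image-level pseudo-label used
# to supervise both view-level predictions (bidirectional consistency).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualViewSegNet(nn.Module):
    """Single-encoder, dual-decoder segmentation network (toy layers)."""

    def __init__(self, in_ch: int = 1, num_classes: int = 4, feat: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Two decoders, one per view.
        self.decoder_a = nn.Conv2d(feat, num_classes, 1)
        self.decoder_b = nn.Conv2d(feat, num_classes, 1)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor):
        logits_a = self.decoder_a(self.encoder(view_a))
        logits_b = self.decoder_b(self.encoder(view_b))
        return logits_a, logits_b


def consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Fuse the two view-level predictions into an image-level pseudo-label
    and enforce consistency of both views against it.

    The fusion here is a hypothetical stand-in: a plain average of the
    softmax probabilities followed by an argmax.
    """
    prob_a, prob_b = logits_a.softmax(dim=1), logits_b.softmax(dim=1)
    fused = 0.5 * (prob_a + prob_b)                    # image-level fusion
    pseudo_label = fused.argmax(dim=1).detach()        # hard pseudo-label
    loss_a = F.cross_entropy(logits_a, pseudo_label)   # pseudo-label -> view A
    loss_b = F.cross_entropy(logits_b, pseudo_label)   # pseudo-label -> view B
    return loss_a + loss_b


if __name__ == "__main__":
    model = DualViewSegNet()
    slices = torch.randn(2, 1, 64, 64)                       # unlabeled MRI slices
    view_a = slices + 0.1 * torch.randn_like(slices)         # perturbed view A
    view_b = slices + 0.1 * torch.randn_like(slices)         # perturbed view B
    logits_a, logits_b = model(view_a, view_b)
    loss = consistency_loss(logits_a, logits_b)
    loss.backward()
    print(f"unsupervised consistency loss: {loss.item():.4f}")
```

In practice this unsupervised term would be combined with a standard supervised loss on the labeled subset; the paper's fusion module and SBOM refine the pseudo-labels and boundaries beyond this simple average.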
