Abstract

Background Parenchymal Enhancement (BPE) quantification in Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) plays a pivotal role in clinical breast cancer diagnosis and prognosis. However, emerging deep learning-based breast fibroglandular tissue segmentation, a crucial step in automated BPE quantification, often suffers from a shortage of training samples with accurate annotations. To address this challenge, we propose a novel iterative cycle-consistent semi-supervised framework that improves segmentation performance by exploiting a large number of paired pre-/post-contrast images without annotations. Specifically, we design a reconstruction network, cascaded with the segmentation network, that learns a mapping from the pre-contrast images and segmentation predictions to the post-contrast images. The reconstruction task thus implicitly captures the inter-relationship between the two contrast phases, which in turn guides the segmentation task. Moreover, the post-contrast images reconstructed across multiple auto-context modeling-based iterations can be viewed as new augmentations, enabling cycle-consistency constraints among the corresponding segmentation outputs. Extensive experiments on two datasets with different data distributions demonstrate superior segmentation and BPE quantification accuracy compared with other state-of-the-art semi-supervised methods. Notably, our method achieves an 11.80-fold improvement in quantification accuracy while running 10 times faster than clinical physicians, demonstrating its potential for automated BPE quantification. The code is available at https://github.com/ZhangJD-ong/Iterative-Cycle-consistent-Semi-supervised-Learning-for-fibroglandular-tissue-segmentation.
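To make the cascaded design concrete, below is a minimal PyTorch-style sketch of one training step. The module names (seg_net, rec_net), the L1 reconstruction and MSE consistency losses, their weights, and the number of auto-context iterations are our own illustrative assumptions rather than the authors' exact implementation, which is available in the linked repository.

```python
import torch
import torch.nn.functional as F

def training_step(seg_net, rec_net, pre, post, mask=None,
                  num_iters=3, w_rec=1.0, w_cyc=0.1):
    """One semi-supervised step on a paired pre-/post-contrast volume.

    seg_net, rec_net: hypothetical nn.Modules with matching channel counts.
    mask: fibroglandular tissue annotation, or None for unlabeled pairs.
    """
    loss = post.new_zeros(())
    preds = []
    post_in = post                      # iteration 0 sees the real post-contrast image
    for _ in range(num_iters):
        # Segmentation from the concatenated pre-/post-contrast input.
        prob = torch.sigmoid(seg_net(torch.cat([pre, post_in], dim=1)))
        preds.append(prob)
        # Cascaded reconstruction: (pre-contrast, prediction) -> post-contrast.
        post_rec = rec_net(torch.cat([pre, prob], dim=1))
        loss = loss + w_rec * F.l1_loss(post_rec, post)
        post_in = post_rec              # reconstruction serves as a new augmentation
        if mask is not None:            # supervised term, labeled subset only
            loss = loss + F.binary_cross_entropy(prob, mask)
    # Cycle-consistency: predictions from real and reconstructed inputs should agree.
    for p in preds[1:]:
        loss = loss + w_cyc * F.mse_loss(p, preds[0].detach())
    return loss
```

Detaching the first prediction in the consistency term is one plausible way to keep it from drifting toward the reconstructed views; the actual loss formulation and iteration count are those reported in the paper.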
