Abstract

Automatic segmentation of medical images is a necessary prerequisite for diagnosing related diseases. Magnetic resonance imaging (MRI) is a widely used non-invasive imaging modality in clinical practice, but obtaining reliably labeled training data for deep learning models is time-consuming and challenging. In this study, we propose adaptive feature aggregation-based multi-task learning for uncertainty-guided semi-supervised image segmentation, dubbed AFAM-Net. The framework leverages a reconstruction task to capture anatomical information from raw images and assist the segmentation task. Moreover, we propose an adaptive feature aggregation strategy that, guided by the associations between the two tasks, selectively transfers useful features while filtering out irrelevant information. To better exploit unlabeled data, we incorporate dual uncertainty-aware methods that improve segmentation performance. We evaluate the proposed AFAM-Net on clinical liver, 3D heart, and cine cardiac MRI datasets. Experimental results show that AFAM-Net significantly outperforms other state-of-the-art semi-supervised medical image segmentation algorithms.
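The abstract does not specify how the dual uncertainty-aware methods operate. A common pattern in uncertainty-guided semi-supervised segmentation (e.g., mean-teacher-style frameworks) is to estimate per-pixel uncertainty from multiple stochastic forward passes and mask the unsupervised consistency loss at unreliable pixels. The sketch below illustrates that general idea only; the function names, the entropy-based uncertainty proxy, and the threshold are illustrative assumptions, not AFAM-Net's actual formulation.

```python
import numpy as np

def mc_uncertainty(prob_maps):
    """Given T stochastic foreground-probability maps (shape T x H x W),
    return the mean prediction and its binary predictive entropy,
    a common per-pixel uncertainty proxy (hypothetical helper)."""
    mean_p = prob_maps.mean(axis=0)
    eps = 1e-8  # avoid log(0)
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1.0 - mean_p) * np.log(1.0 - mean_p + eps))
    return mean_p, entropy

def masked_consistency_loss(student_p, teacher_p, uncertainty, thresh):
    """Mean squared consistency penalty between student and teacher
    predictions, keeping only pixels whose uncertainty is below
    `thresh` (an illustrative cutoff, not the paper's)."""
    mask = (uncertainty < thresh).astype(float)
    diff = (student_p - teacher_p) ** 2
    return (mask * diff).sum() / (mask.sum() + 1e-8)
```

Pixels where the averaged prediction is near 0.5 carry high entropy and are excluded from the consistency term, so unlabeled data contributes gradients only where the teacher is confident.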
