Abstract

Accurate, robust, and automatic segmentation of the left atrium (LA) in magnetic resonance images (MRI) is of great significance for studying LA structure and for facilitating the diagnosis and treatment of atrial fibrillation. Semi-supervised learning has attracted great attention in medical image segmentation because it alleviates the heavy burden of annotating training data. In this paper, we propose a context-aware network, called CA-Net, for semi-supervised LA segmentation from 3D MRI. The information to be learned from 3D MRI comprises not only the contextual information within each slice but also the spatial information across slices, which existing methods do not sufficiently exploit. In the proposed CA-Net, a Trans-V module, which combines Transformers and V-Net, learns contextual information in 3D MRI. During training, a discriminator with attention mechanisms computes an adversarial loss so that a large amount of unlabeled data can be utilized. Experimental results on the Atrial Segmentation Challenge dataset show that contextual information helps extract more accurate atrial structures, and that the proposed CA-Net outperforms several state-of-the-art semi-supervised networks, achieving Dice scores of 88.14% and 90.09% when trained with 10% and 20% of labeled data, respectively. Code will be available at: https://github.com/RhythmI/CA-Net-master.
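The abstract describes combining a supervised segmentation loss on labeled data with a discriminator-driven adversarial loss that lets unlabeled data contribute to training. A minimal NumPy sketch of such a combined objective is shown below; the function names, the soft-Dice formulation, and the weighting factor `lam` are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask.
    Returns 0 for a perfect match, approaching 1 for no overlap."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def adversarial_loss(disc_scores, eps=1e-6):
    """Generator-side adversarial term: encourage the segmentation network
    to make the discriminator score unlabeled predictions as 'real' (-> 1)."""
    return float(-np.mean(np.log(disc_scores + eps)))

def semi_supervised_loss(pred_labeled, mask_labeled, disc_scores_unlabeled,
                         lam=0.1):
    """Total loss = supervised Dice on labeled data
    + lam * adversarial loss on unlabeled predictions (lam is a guess)."""
    return (dice_loss(pred_labeled, mask_labeled)
            + lam * adversarial_loss(disc_scores_unlabeled))

# Toy usage: a labeled prediction that matches its mask exactly, and
# discriminator scores of 0.9 on two unlabeled predictions.
pred = np.array([[1.0, 0.0], [0.0, 1.0]])
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
disc = np.array([0.9, 0.9])
total = semi_supervised_loss(pred, mask, disc)
```

With a perfect labeled prediction, the Dice term vanishes and only the small weighted adversarial term remains, so the unlabeled data still shapes the gradient.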
