Abstract
Background and objective: Automatic thymoma segmentation in preoperative contrast-enhanced computed tomography (CECT) images is of great value for diagnosis. Although convolutional neural networks (CNNs) excel at medical image segmentation, the intrinsic locality of convolution operations makes it difficult for them to handle thymomas with varied shapes, scales, and textures. To overcome this limitation, we built a deep learning network with enhanced global-awareness for thymoma segmentation.

Methods: We propose a multi-level global-aware network (MG-Net) for thymoma segmentation, in which multi-level feature interaction and integration are jointly designed to enhance the global-awareness of CNNs. In particular, we design a cross-attention block (CAB) to compute pixel-wise interactions of multi-level features, yielding the Global Enhanced Convolution Block, which strengthens the global-awareness of the encoder so that the network can handle thymomas of diverse appearance. We further devise the Global Spatial Attention Module, which uses CABs to integrate coarse- and fine-grained information and enhance the semantic consistency between the encoder and decoder. We also develop an Adaptive Attention Fusion Module that adaptively aggregates features of different semantic scales in the decoder to preserve comprehensive details.

Results: MG-Net was evaluated against several state-of-the-art models on a self-collected CECT dataset and the NIH Pancreas-CT dataset. The results show that each designed component is effective and that MG-Net surpasses existing models in both segmentation performance and generalization ability.

Conclusion: Both qualitative and quantitative experimental results indicate that MG-Net, with its global-aware design, achieves accurate thymoma segmentation and generalizes well across different tasks. The code is available at: https://github.com/Leejyuan/MGNet.
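The authors' implementation is available at the linked repository. As an illustration only, the following is a minimal sketch of what a cross-attention block over two feature levels might look like in PyTorch; the class name, the choice of which level supplies the query versus the key/value, and all shapes are assumptions, not the paper's actual code.

```python
# Hypothetical sketch of a cross-attention block (CAB) between two feature
# levels. This is NOT the authors' implementation; it only illustrates the
# general idea of pixel-wise cross-attention between multi-level features.
import torch
import torch.nn as nn


class CrossAttentionBlock(nn.Module):
    """Attends a fine-level feature map (query) over a coarse-level
    feature map (key/value) to inject global context."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # fine:   (B, C, H, W)   coarse: (B, C, H', W')
        b, c, h, w = fine.shape
        q = self.query(fine).flatten(2).transpose(1, 2)       # (B, HW, C)
        k = self.key(coarse).flatten(2)                       # (B, C, H'W')
        v = self.value(coarse).flatten(2).transpose(1, 2)     # (B, H'W', C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)      # (B, HW, H'W')
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)  # (B, C, H, W)
        return out + fine                                     # residual connection


# Example usage (hypothetical shapes): fuse two feature levels that have
# already been projected to the same channel dimension.
# cab = CrossAttentionBlock(channels=256)
# fused = cab(fine_features, coarse_features)
```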