Abstract
Accurate measurement of brain structures is essential for evaluating neonatal brain growth and development. Conventional methods rely on manual segmentation to measure brain tissues, which is time-consuming and inefficient. Recent deep learning methods achieve excellent performance in computer vision, but they remain unsatisfactory for segmenting magnetic resonance images of neonatal brains, whose immature tissues have unique imaging attributes. In this paper, we propose a novel attention-modulated multi-branch convolutional neural network for neonatal brain tissue segmentation. The proposed network is built on the encoder-decoder framework, introducing multi-scale convolutions in the encoding path and multi-branch attention modules in the decoding path. The multi-scale convolutions use different kernel sizes to extract rich semantic features across large receptive fields in the encoding path. The multi-branch attention modules capture abundant contextual information in the decoding path by fusing local features with their corresponding global dependencies. Spatial attention connections between the encoding and decoding paths increase feature propagation, both avoiding information loss during downsampling and accelerating training convergence. The proposed network was evaluated against baseline methods on three neonatal brain datasets. For gray matter, white matter and cerebrospinal fluid respectively, our network achieves average Dice similarity coefficients / average Hausdorff distances of 0.9116/8.1289, 0.9367/9.8212 and 0.8931/8.1612 on the customized dCBP2021 dataset; 0.8786/11.7863, 0.8965/13.4296 and 0.8539/10.462 on the public NBAtlas dataset; and 0.9253/7.7968, 0.9448/9.5472 and 0.9132/7.5877 on the public dHCP2017 dataset. The experimental results show that the proposed method achieves performance competitive with the state of the art in neonatal brain tissue segmentation. The code and pre-trained models are available at https://github.com/zhangyongqin/AMCNN.
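To make the architectural description concrete, the sketch below illustrates two of the ideas the abstract names: multi-scale convolutions with different kernel sizes, and a spatial-attention-gated skip connection between the encoding and decoding paths. This is a minimal, hypothetical PyTorch illustration, not the authors' released implementation (see the GitHub link above for that); the module names, channel sizes, and the use of 2D rather than volumetric convolutions are all simplifying assumptions made here for brevity.

```python
# Hypothetical sketch only; the official code lives at the GitHub link above.
# Uses 2D convolutions for brevity, whereas the actual network presumably
# operates on volumetric MRI.
import torch
import torch.nn as nn


class MultiScaleConv(nn.Module):
    """Extract features at several receptive fields and fuse them."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Parallel branches with 3x3, 5x5 and 7x7 kernels (same padding),
        # so each branch sees a different receptive field.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for k in (3, 5, 7)
        )
        # A 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class SpatialAttentionSkip(nn.Module):
    """Gate an encoder feature map with a per-pixel attention mask before
    passing it to the decoder, emphasizing informative locations."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),  # per-pixel score
            nn.Sigmoid(),                           # attention in [0, 1]
        )

    def forward(self, enc_feat: torch.Tensor) -> torch.Tensor:
        return enc_feat * self.mask(enc_feat)


if __name__ == "__main__":
    x = torch.randn(1, 1, 64, 64)             # e.g. a single-channel MRI slice
    feats = MultiScaleConv(1, 32)(x)           # multi-scale encoder features
    skipped = SpatialAttentionSkip(32)(feats)  # attention-gated skip feature
    print(skipped.shape)                       # torch.Size([1, 32, 64, 64])
```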