Abstract

Background and Objective. Automatic segmentation of brain tumor regions in MRI is a key step in the diagnosis and treatment of brain tumors. In recent years, networks built on the UNet encoder-decoder structure have been widely used for brain tumor segmentation. However, because of repeated convolution and pooling operations, spatial context information in existing networks becomes fragmented or is lost altogether, which degrades segmentation accuracy. The method proposed in this paper therefore aims to alleviate this loss of spatial context information and improve model accuracy. Approach. This paper proposes a contextual attention module (multiscale contextual attention) that captures and filters high-level features carrying spatial context information, addressing the loss of context information during feature extraction. A channel attention mechanism is introduced into the decoder to fuse high-level and low-level features, and the standard convolution blocks in the encoder-decoder structure are replaced with pre-activated residual blocks to ease network training and improve performance. Results. The proposed method is evaluated on two public datasets, BraTS 2017 and BraTS 2019. Experimental results show that it effectively alleviates the loss of spatial context information and achieves better segmentation performance than other existing methods. Significance. The improved segmentation performance can assist clinicians in making accurate diagnoses and provide a reference for tumor resection, thereby reducing surgical risk for patients and the rate of postoperative recurrence.
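To make the architectural ideas in the Approach concrete, the following is a minimal PyTorch sketch of two of the named components: a pre-activated residual block (BN-ReLU-Conv ordering, as in pre-activation ResNets) and a channel-attention fusion of decoder and encoder features (here realized with a squeeze-and-excitation-style gate). It is not the authors' implementation; the use of 2D convolutions, the channel counts, the SE-style gating, and all class and parameter names are illustrative assumptions, and the paper's multiscale contextual attention module is not reproduced here because its details are not given in the abstract.

import torch
import torch.nn as nn


class PreActResidualBlock(nn.Module):
    # Pre-activated residual block: BN -> ReLU -> Conv, twice, with an identity skip.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection so the skip connection matches when channel counts differ.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        return out + self.skip(x)


class ChannelAttentionFusion(nn.Module):
    # Fuses a high-level decoder feature with a low-level encoder skip feature,
    # then re-weights channels with a squeeze-and-excitation-style gate (assumed design).
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=1)  # merge concatenated features
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                   # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                              # per-channel weights in (0, 1)
        )

    def forward(self, high, low):
        x = self.fuse(torch.cat([high, low], dim=1))
        return x * self.gate(x)


if __name__ == "__main__":
    block = PreActResidualBlock(32, 64)
    fusion = ChannelAttentionFusion(64)
    x = torch.randn(1, 32, 60, 60)        # hypothetical 2D feature map
    high = block(x)                        # torch.Size([1, 64, 60, 60])
    low = torch.randn(1, 64, 60, 60)       # encoder skip feature of matching shape
    print(high.shape, fusion(high, low).shape)

In a UNet-style decoder, a block like ChannelAttentionFusion would sit at each resolution level, taking the upsampled decoder feature and the corresponding encoder skip connection as inputs; the channel gate then emphasizes informative channels before the next PreActResidualBlock.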
