Abstract

With social and economic development and rising living standards, smart healthcare is booming, and medical image processing has become an increasingly active research area, of which brain tumor segmentation is an important branch. However, manual segmentation of brain tumors demands considerable time and effort from physicians and directly affects patient treatment. To address this problem, we propose DO‐UNet, a model for magnetic resonance imaging (MRI) brain tumor segmentation based on an attention mechanism and multi‐scale feature fusion, which achieves fully automatic segmentation. First, we replace the convolution blocks in the original U‐Net with residual modules to prevent vanishing gradients. Second, we add multi‐scale feature fusion to the skip connections of U‐Net to fuse low‐level and high‐level features more effectively. In addition, in the decoding stage we introduce an attention mechanism to increase the weight of informative features and avoid redundancy. Finally, we replace the conventional convolutions in the model with DO‐Conv to speed up network training and improve segmentation accuracy. To evaluate the model, we trained it on the BraTS2018, BraTS2019, and BraTS2020 datasets and validated it online on each. Experimental results show that DO‐UNet effectively improves the accuracy of brain tumor segmentation and achieves good segmentation performance.
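Two of the building blocks named above can be illustrated compactly. The sketch below is not the paper's implementation: it uses plain NumPy with linear maps standing in for convolutions, and it assumes an additive attention gate in the style of Attention U-Net, which the abstract does not specify; all weight names (`w1`, `w2`, `w_s`, `w_g`, `psi`) are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_block(x, w1, w2):
    """Residual module sketch: two weighted maps with a ReLU, plus an
    identity shortcut so gradients can flow past the block unchanged."""
    return relu(x @ w1) @ w2 + x

def attention_gate(skip, gate, w_s, w_g, psi):
    """Additive attention gate (an assumption, Attention U-Net style):
    the decoder signal `gate` re-weights encoder `skip` features with
    coefficients in (0, 1), down-weighting uninformative responses."""
    a = sigmoid(relu(skip @ w_s + gate @ w_g) @ psi)  # attention coefficients
    return skip * a                                   # gated skip features

rng = np.random.default_rng(0)
c = 4
x = rng.standard_normal((2, c))
# With the second weight matrix zeroed, only the identity path survives,
# which is exactly why residual blocks resist vanishing gradients.
y = residual_block(x, rng.standard_normal((c, c)), np.zeros((c, c)))
print(np.allclose(y, x))  # True
```

Because the gate's coefficients lie in (0, 1), the gated skip features are never larger in magnitude than the originals, which matches the abstract's description of increasing the weight of effective information rather than amplifying everything.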
