Abstract

Medical image segmentation is a significant research topic in digital image processing: it locates and identifies organs and cells, which is essential for clinical analysis, diagnosis, and treatment. Because of the high heterogeneity of pathological tissues and the limited resolution of multimodal magnetic resonance images, we propose a multimodal brain tumor image segmentation method based on the ACU-Net network. First, we preprocess the brain images to balance the number of samples per category. We adopt depthwise separable convolutional layers in place of the ordinary convolutions in U-Net, so that the spatial correlation and the cross-channel (appearance) correlation of the mapped feature channels are modeled separately. We introduce residual skip connections into ACU-Net to strengthen feature propagation and speed up network convergence, enabling the capture of deep abnormal regions. We use an active contour model to counteract image noise and edge cracks, track tumor deformation, and resolve the blurred boundaries of the edema area, so that the tumor core and the enhancing and necrotic parenchymal regions within the abnormal area can be segmented accurately. In this paper, 17,926 MRI images from 335 patients in the BraTS 2015, BraTS 2018, and BraTS 2019 datasets are used for training and validation. Our experiments demonstrate that the ACU-Net network outperforms other segmentation algorithms in both subjective visual quality and objective metrics when applied to brain tumor image segmentation.
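
The depthwise separable convolution and residual skip connection described above can be summarized in a short sketch. The block below is a minimal PyTorch illustration, assuming a 2-D slice-based pipeline; the class names, channel sizes, and layer ordering are our own assumptions and not the authors' released code.

```python
# A minimal sketch of one ACU-Net-style encoder block, assuming a PyTorch
# implementation. Layer names, channel sizes, and the placement of the
# residual projection are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Depthwise step: one filter per input channel (groups=in_ch)
        # captures spatial correlation only.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise step: 1x1 convolution mixes channels, capturing
        # cross-channel (appearance) correlation.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class ResidualSeparableBlock(nn.Module):
    """Two separable convolutions with a residual skip connection."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = DepthwiseSeparableConv(in_ch, out_ch)
        self.conv2 = DepthwiseSeparableConv(out_ch, out_ch)
        # 1x1 projection so the skip path matches the output channel count.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))

    def forward(self, x):
        return self.conv2(self.conv1(x)) + self.skip(x)


if __name__ == "__main__":
    # Four input channels mimicking the four MRI modalities (T1, T1ce, T2, FLAIR).
    block = ResidualSeparableBlock(in_ch=4, out_ch=32)
    out = block(torch.randn(1, 4, 240, 240))
    print(out.shape)  # torch.Size([1, 32, 240, 240])
```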

Highlights

  • As a neurosurgical disease, brain tumors have a lower incidence than stomach, breast, uterine, and esophageal tumors, but a much higher mortality rate

  • Benign tumors grow slowly and lack the ability to infiltrate or metastasize. 80% of malignant brain tumors are gliomas and metastases, and gliomas [1] can be divided into low-grade gliomas (LGG) and high-grade gliomas (HGG) according to their aggressiveness

  • To address the above problems, we propose a multimodal brain tumor image segmentation method based on the ACU-Net network


Summary

INTRODUCTION

The incidence of brain tumors is lower than that of stomach, breast, uterine, and esophageal tumors, but the mortality rate is much higher. Badrinarayanan et al. [4] proposed a pixel-level segmentation network (SegNet), which enforces spatial consistency and optimizes training with stochastic gradient descent, emphasizing memory usage and computational efficiency. He et al. [5] proposed an adaptive pyramid context network (APCNet), which uses Global-guided Local Affinity (GLA) to aggregate features from related pixels or regions and adaptively construct multi-scale context vectors. Brain tumor segmentation still faces the following challenges: first, the datasets are small. To address these problems, we propose a multimodal brain tumor image segmentation method based on the ACU-Net network. In particular, we insert the active contour model into ACU-Net to ensure the fit between the inside and outside of the boundary and thereby improve segmentation accuracy.
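
To illustrate how an active contour step can refine a coarse network prediction, the sketch below uses scikit-image's morphological Chan-Vese implementation as a stand-in for the active contour component described in the paper; the function name refine_with_active_contour, the iteration count, and the smoothing weight are illustrative assumptions rather than the authors' configuration.

```python
# A minimal sketch of an active-contour refinement step applied to a coarse
# network prediction, assuming scikit-image's morphological Chan-Vese model
# as a stand-in for the paper's active contour component.
import numpy as np
from skimage.segmentation import morphological_chan_vese


def refine_with_active_contour(flair_slice, coarse_mask, iterations=50):
    """Evolve a coarse tumor mask toward intensity edges in a FLAIR slice.

    flair_slice : 2-D float array, one MRI slice normalized to [0, 1].
    coarse_mask : 2-D boolean array, the network's initial prediction,
                  used as the initial level set.
    """
    refined = morphological_chan_vese(
        flair_slice,
        iterations,                  # number of contour-evolution steps
        init_level_set=coarse_mask,  # start from the network output
        smoothing=2,                 # higher value -> smoother boundary
    )
    return refined.astype(bool)


if __name__ == "__main__":
    # Synthetic example: a bright noisy blob plays the role of a lesion.
    rng = np.random.default_rng(0)
    img = rng.normal(0.2, 0.05, (128, 128))
    img[40:80, 50:90] += 0.6
    init = np.zeros((128, 128), dtype=bool)
    init[45:75, 55:85] = True        # deliberately under-segmented seed
    mask = refine_with_active_contour(np.clip(img, 0, 1), init)
    print(mask.sum(), "pixels inside the refined contour")
```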

RELATED WORK
DEPTHWISE SEPARABLE CONVOLUTION
Findings
CONCLUSION