Abstract

In medical imaging, extracting the brain tumor region from a magnetic resonance image (MRI) is not sufficient; the tumor's extension must also be found in order to plan the best treatment and improve the survival rate, which depends on the tumor's size and location and the patient's age. Manually extracting brain tumor sub-regions from an MRI volume is tedious and time consuming, and the inherently complex brain tumor images require a proficient radiologist. Thus, reliable multi-modal deep learning models are proposed for automatic segmentation of the sub-regions: enhancing tumor (ET), tumor core (TC), and whole tumor (WT). These models are built on the U-Net and VGG16 architectures. The whole tumor is obtained by segmenting T2-weighted images and cross-checking the edema's extension in T2 fluid-attenuated inversion recovery (FLAIR) images. ET and TC are both extracted by evaluating the hyper-intensities in T1-weighted contrast-enhanced images. The proposed method produces better results in terms of the Dice similarity index, Jaccard similarity index, accuracy, specificity, and sensitivity for the segmented sub-regions. Experimental results on the BraTS 2018 database show that the proposed DL model outperforms, with average Dice coefficients of 0.91521, 0.92811, and 0.96702, and Jaccard coefficients of 0.84715, 0.88357, and 0.93741 for ET, TC, and WT, respectively.
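The reported Dice and Jaccard coefficients measure overlap between a predicted segmentation mask and the ground truth. As a minimal sketch (not the paper's evaluation pipeline), the two metrics can be computed for flattened binary masks as follows; the function names and the toy masks are illustrative assumptions:

```python
def dice_coefficient(pred, truth):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for equal-length binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

def jaccard_coefficient(pred, truth):
    """Jaccard = |A ∩ B| / |A ∪ B| for equal-length binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    union = sum(pred) + sum(truth) - intersection
    return intersection / union if union else 1.0

# Toy example: flattened predicted and ground-truth masks.
pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, truth))     # 2*2/(3+3) ≈ 0.667
print(jaccard_coefficient(pred, truth))  # 2/4 = 0.5
```

In practice these metrics are computed per sub-region (ET, TC, WT) over each 3D MRI volume and then averaged across the test set, which is how the abstract's figures are reported.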
