Abstract

MRI is a widely used imaging modality for detecting gliomas. MRI scans contain a large amount of image information, so accurate image processing is essential in clinical practice, and automatic, consistent methods are required to extract precise details from the images. Automated segmentation, however, faces obstacles such as the large spatial and structural variability of brain tumors. In this work, a semantic U-Net convolutional neural network using 3×3 kernels is proposed. Small kernels help guard against overfitting in deeper architectures and keep the number of weights in the network low. A multiscale multimodal convolutional neural network (MSMCNN) with a long short-term memory (LSTM)-based deep semantic segmentation technique is applied to multimodal magnetic resonance images (MRI). The proposed methodology aims to identify and separate tumor classes by analyzing every pixel in the image, and the performance of the semantic segmentation is further enhanced by a patch-wise classification technique: a multiscale U-Net-based deep convolutional network classifies each pixel using patches extracted at three different scales. To identify the tumor classes, all three pathways are combined in the LSTM network. The proposed methodology is validated with a fivefold cross-validation scheme on the BRATS'15 MRI dataset. The experimental results show that the MSMCNN model outperforms CNN-based models in Dice coefficient and positive predictive value, achieving a sensitivity of 0.9214 and an accuracy of 0.9636.
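The patch-wise, multiscale idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch sizes (8, 16, 32) and the zero-padding scheme are assumptions chosen for the example, and in the actual pipeline each extracted patch would feed one of the three CNN pathways before LSTM fusion.

```python
import numpy as np

def multiscale_patches(image, row, col, scales=(8, 16, 32)):
    """Extract square patches at several scales, centred on one pixel.

    The image is zero-padded so patches near the border stay full-sized.
    Patch sizes are illustrative assumptions, not taken from the paper.
    """
    patches = []
    for s in scales:
        half = s // 2
        padded = np.pad(image, half, mode="constant")
        # After padding by `half`, pixel (row, col) maps to
        # (row + half, col + half), so this slice is centred on it.
        patch = padded[row:row + s, col:col + s]
        patches.append(patch)
    return patches

# Toy 64x64 "slice": extract the three patches around pixel (10, 10).
slice_ = np.arange(64 * 64, dtype=float).reshape(64, 64)
p8, p16, p32 = multiscale_patches(slice_, 10, 10)
```

Classifying the centre pixel from patches at several scales lets the network see both fine local texture and broader spatial context, which is the motivation for the three pathways combined in the LSTM.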
