Abstract

Accurate, automated brain tumor segmentation from multimodal MR images is essential for evaluating disease progression and for improving diagnosis and treatment planning. We present a new, fully automated method for high-grade brain tumor segmentation that combines a sparse autoencoder with multimodal fuzzy C-means (FCM) clustering. The approach uses four MRI contrasts: T1, T2, FLAIR, and contrast-enhanced T1 (T1c), for 15 high-grade glioma (HGG) subjects. The objective of the study is to segment tumor tissues from HGG, including edema and the tumor core within the edema. Segmentation was performed on the training data of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) 2015. A sparse autoencoder, an unsupervised learning algorithm, was used to automatically learn features from an unlabeled tumor dataset in order to segment edema. Following edema segmentation, the tumor core was segmented from the edema using multimodal FCM clustering. Evaluation against the ground truth yields high Dice scores (DS) of 0.9866±0.01 and 0.9843±0.01 and high Jaccard similarities (JS) of 0.9738±0.02 and 0.9692±0.02 for edema and tumor core, respectively, demonstrating high accuracy in segmenting complex tumor structures from multi-contrast MR scans of HGG patients. We also compared the segmentation efficiency of our methodology with recent techniques reported in the proceedings of the MICCAI BRATS 2015 challenge.
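The abstract reports results using the Dice score (DS) and Jaccard similarity (JS), the standard overlap metrics for comparing a predicted segmentation mask against the ground truth. A minimal sketch of how these two metrics are computed on binary masks (the function names and the NumPy-based implementation are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def jaccard_similarity(pred, truth):
    """Jaccard index (intersection over union): |A ∩ B| / |A ∪ B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union

# Toy example on small binary masks (a real evaluation would use
# the 3D edema or tumor-core masks from the BRATS 2015 ground truth).
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, truth))         # 2*2 / (3+3) ≈ 0.667
print(jaccard_similarity(pred, truth)) # 2 / 4 = 0.5
```

The two metrics are monotonically related (JS = DS / (2 − DS)), which is why a high Dice score such as 0.9866 corresponds to a Jaccard similarity near 0.974, as reported in the abstract.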
