Abstract

Segmentation of a brain tumor from multimodal magnetic resonance images is a challenging task in medical imaging. The vast diversity in potential target regions, appearance, and intensity levels across tumor types is among the major factors that degrade segmentation results. Accurate diagnosis and treatment demand precise delineation of the tumor-affected tissues. Herein, we focus on an automated and robust brain tumor segmentation approach based on a modified 3D U-Net architecture. Pre-operative multimodal 3D-MRI scans of High-Grade Glioma (HGG) and Low-Grade Glioma (LGG) patients are used as data. Our proposed approach addresses memory and system resource constraints by training the dense network on patches extracted from the 3D volumes. It improves detection of border-region artifacts by applying convolutions at an appropriate stage of the proposed network. Multi-class data imbalance is handled with a Categorical Cross Entropy (CCE) loss formed by combining Weighted Cross Entropy (WCE) with a Weighted Multi-class Dice Loss (WMDL), which enables the network to segment the smaller tumorous regions accurately. The proposed approach is tested and evaluated on challenge datasets of multimodal MRI volumes of tumor patients. Experiments are performed to compute the average Dice scores on the BraTS-2019 and BraTS-2020 datasets for the whole tumor region.
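The combined loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the exact class weights, the per-term weighting `alpha`, and the function names are assumptions for illustration only. It operates on flattened voxel predictions, with `probs` holding softmax outputs of shape `(N, C)` and `onehot` the one-hot ground-truth labels.

```python
import numpy as np

def weighted_cross_entropy(probs, onehot, class_weights, eps=1e-7):
    """Per-voxel cross entropy, scaled by a per-class weight vector.

    probs:   (N, C) softmax outputs
    onehot:  (N, C) one-hot ground-truth labels
    class_weights: (C,) weights emphasizing rare (small-region) classes
    """
    ce = -onehot * np.log(probs + eps)          # (N, C) per-class CE terms
    return np.sum(ce * class_weights) / probs.shape[0]

def weighted_multiclass_dice_loss(probs, onehot, class_weights, eps=1e-7):
    """Soft Dice computed per class, then averaged with class weights."""
    intersect = np.sum(probs * onehot, axis=0)              # (C,)
    denom = np.sum(probs, axis=0) + np.sum(onehot, axis=0)  # (C,)
    dice_per_class = (2.0 * intersect + eps) / (denom + eps)
    w = class_weights / np.sum(class_weights)   # normalize to a convex combination
    return 1.0 - np.sum(w * dice_per_class)

def combined_loss(probs, onehot, class_weights, alpha=0.5):
    """Hypothetical blend of WCE and WMDL; alpha balances the two terms."""
    return (alpha * weighted_cross_entropy(probs, onehot, class_weights)
            + (1.0 - alpha) * weighted_multiclass_dice_loss(probs, onehot, class_weights))
```

Weighting the Dice term per class keeps small tumor sub-regions from being dominated by the background class, which is the imbalance problem the abstract refers to.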
