Abstract
The success of deep learning, a subfield of artificial intelligence, in image analysis and computer vision can be leveraged to build better decision support systems for clinical radiology. Detecting and segmenting tumorous tissue in the brain is one such scenario, where radiologists can benefit from a computer-based second opinion when assessing disease severity and patient survival, enabling an accurate and timely clinical diagnosis. Gliomas are an aggressive form of brain tumor with irregular shapes and ambiguous boundaries, making them among the hardest tumors to detect; accurate detection often requires a combined analysis of several types of radiological scans. In this paper, we present a fully automatic deep learning method for brain tumor segmentation in multimodal, multi-contrast magnetic resonance imaging scans. The proposed approach is based on a lightweight U-Net architecture, a multimodal CNN encoder-decoder computational model. On the publicly available Brain Tumor Segmentation (BraTS) Challenge 2018 dataset from the Medical Image Computing and Computer Assisted Intervention (MICCAI) society, the proposed lightweight U-Net model, which requires no data augmentation and no heavy computational resources, achieves improved performance compared to previous models in the challenge task that used heavy computational architectures and various data augmentation approaches. This makes the proposed model more suitable for remote, extreme, and low-resource health care settings.
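The abstract describes a lightweight U-Net style encoder-decoder taking multimodal MRI input. The paper's actual layer configuration is not given here, so the following is only a minimal PyTorch sketch of that kind of architecture: a shallow two-level U-Net with skip connections, four input channels (one per MRI modality, as in BraTS), and small, illustrative channel widths chosen to keep the parameter count low. All layer sizes and names are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, preserving spatial size."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class LightUNet(nn.Module):
    """Illustrative lightweight U-Net: 2 encoder levels, a bottleneck,
    2 decoder levels with skip connections, and a 1x1 prediction head."""

    def __init__(self, in_ch=4, n_classes=3, base=16):
        # in_ch=4: e.g. T1, T1ce, T2, FLAIR modalities stacked as channels
        # n_classes=3: e.g. the BraTS tumor sub-regions (assumed here)
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)  # base*4 after skip concat
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)      # base*2 after skip concat
        self.head = nn.Conv2d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                     # full resolution
        e2 = self.enc2(self.pool(e1))         # 1/2 resolution
        b = self.bottleneck(self.pool(e2))    # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # per-pixel class logits
```

The "lightweight" aspect in this sketch comes from the shallow depth and narrow channel widths (`base=16`), which keep the model small enough for low-resource settings; the segmentation map has the same spatial size as the input, with one logit channel per tumor class.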