Abstract

Classification of brain tumors is one of the most challenging tasks in medical imaging, and incorrect decisions during diagnosis may lead to increased human fatality. Recent advances in artificial intelligence and deep learning have paved the way for the success of numerous medical image analysis tasks, including the recognition of brain tumors. In this paper, we propose a simple deep learning architecture that generalizes strongly without requiring much preprocessing. The proposed Multi-modal Squeeze and Excitation model (MSENet) receives multiple representations of a given tumor image, learns end-to-end, and effectively predicts the severity level of the tumor. Convolutional feature descriptors from multiple deep pre-trained models are used to describe the tumor images and are supplied as input to the proposed MSENet. The squeeze-and-excitation blocks of the MSENet allow the model to prioritize tumor regions while giving less emphasis to the rest of the image, serving as an attention mechanism. The proposed model is evaluated on the benchmark brain tumor dataset that is publicly available from the Figshare repository. Experimental studies reveal that, in terms of model parameters, the proposed approach is simple and achieves competitive performance compared to existing, more complex models. With increased complexity, the proposed model generalizes better and achieves a state-of-the-art accuracy of 96.05% on the Figshare dataset. Unlike existing models, the proposed model uses neither segmentation nor augmentation techniques and achieves competitive performance without much pre-processing.
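
To illustrate the attention mechanism the abstract attributes to MSENet, below is a minimal sketch of a standard squeeze-and-excitation (SE) block that re-weights channels of convolutional feature maps. The use of PyTorch, the reduction ratio, and the tensor sizes are assumptions for illustration only; the paper's exact layer configuration is not given in the abstract.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation block: learns per-channel attention weights."""

    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is assumed
        super().__init__()
        # Squeeze: global average pooling collapses each feature map to a scalar.
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP produces per-channel weights in (0, 1).
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)        # (B, C) channel descriptors
        w = self.excite(w).view(b, c, 1, 1)   # (B, C, 1, 1) channel weights
        return x * w                          # emphasize informative channels, suppress the rest

# Hypothetical usage: re-weight feature maps extracted by a pre-trained backbone.
features = torch.randn(2, 512, 7, 7)          # placeholder backbone output
recalibrated = SEBlock(512)(features)
print(recalibrated.shape)                     # torch.Size([2, 512, 7, 7])

In MSENet, such blocks would sit on top of the multi-modal convolutional descriptors so that channels responding to tumor regions receive higher weights than those responding to background tissue.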
