Brain tumors are among the leading causes of cancer-related mortality worldwide, making precise detection essential for improving patient survival rates. Early identification of brain tumors remains a significant challenge in healthcare, requiring accurate and efficient diagnostic methods. Manual identification and analysis of extensive MRI data is a demanding and laborious task, and the importance of early tumor detection in reducing mortality makes it even more pressing. Prompt initiation of treatment hinges on identifying the specific tumor type in each patient, underscoring the need for a dependable deep learning methodology for precise diagnosis. This research presents a hybrid model that integrates the strengths of transfer learning and the transformer encoder mechanism. After evaluating the performance of six pre-existing deep learning models, both individually and in combination, an ensemble of three pretrained models was found to achieve the highest accuracy. This ensemble, comprising DenseNet201, GoogleNet (InceptionV3), and InceptionResNetV2, is used as the feature extraction framework for the transformer encoder network. The transformer encoder module combines shifted window-based self-attention and sequential self-attention with a multilayer perceptron (MLP) layer. Experiments were conducted on three publicly available research datasets: the Cheng dataset, BT-large-2c, and BT-large-4c, each targeting a different classification task and differing in sample count, imaging planes, and contrast. The model gives consistent results on all three datasets, reaching accuracies of 99.34%, 99.16%, and 98.62%, respectively, an improvement over other techniques.
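To make the described architecture concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how an ensemble of the three named pretrained backbones could feed a transformer-style encoder for classification. It assumes frozen ImageNet weights, a 224x224 input, a common embedding width, and substitutes standard multi-head self-attention for the paper's shifted window-based variant; all layer sizes and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of the hybrid design: three pretrained CNN feature
# extractors (transfer learning) whose pooled features are treated as a short
# token sequence and passed through one transformer encoder block (attention + MLP).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201, InceptionV3, InceptionResNetV2


def build_hybrid_model(input_shape=(224, 224, 3), num_classes=4,
                       embed_dim=256, num_heads=4):
    inputs = layers.Input(shape=input_shape)

    # Frozen pretrained backbones used purely as feature extractors.
    backbones = [
        DenseNet201(include_top=False, weights="imagenet", input_shape=input_shape),
        InceptionV3(include_top=False, weights="imagenet", input_shape=input_shape),
        InceptionResNetV2(include_top=False, weights="imagenet", input_shape=input_shape),
    ]
    features = []
    for backbone in backbones:
        backbone.trainable = False
        f = backbone(inputs)                         # spatial feature map
        f = layers.GlobalAveragePooling2D()(f)       # pooled feature vector
        features.append(layers.Dense(embed_dim)(f))  # project to a common width

    # Stack the three feature vectors as a length-3 token sequence.
    tokens = layers.Concatenate(axis=1)(
        [layers.Reshape((1, embed_dim))(f) for f in features]
    )

    # One transformer encoder block: self-attention followed by an MLP,
    # each with a residual connection and layer normalization.
    # (Standard multi-head attention here; the paper uses a shifted-window variant.)
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)(tokens, tokens)
    x = layers.LayerNormalization()(tokens + attn)
    mlp = layers.Dense(embed_dim * 2, activation="gelu")(x)
    mlp = layers.Dense(embed_dim)(mlp)
    x = layers.LayerNormalization()(x + mlp)

    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs)


# Example usage: 4-class setup (e.g., a BT-large-4c-style task); labels assumed one-hot.
model = build_hybrid_model(num_classes=4)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

The token-sequence formulation is only one plausible way to couple CNN features with an encoder; the actual fusion strategy, attention configuration, and training schedule are described in the full paper.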