MB-MSAT-Net: A Multi-Branch Multi-Scale Attention Framework with EFO and TabNet for Accurate and Interpretable Brain Tumor Classification
- Research Article
- 10.11591/ijeecs.v34.i2.pp825-834
- May 1, 2024
- Indonesian Journal of Electrical Engineering and Computer Science
Early identification and treatment of brain tumors depend critically on accurate classification. Accurate brain tumor classification in medical imaging is essential for clinical decisions and individualized treatment plans. This paper introduces a novel method for classifying brain tumors, called multimodal fusion deep transfer learning (MMFDTL), using original, contoured, and annotated magnetic resonance imaging (MRI) images to showcase its capabilities. MMFDTL can capture complex tumor features frequently missed when individual modalities are analyzed in isolation. The MMFDTL model employs three deep learning models for feature extraction: VGG16, Inception V3, and ResNet50. Accuracy improves when these are combined with decision-based multimodal fusion. The method achieves a sensitivity of 92.96%, specificity of 98.54%, precision of 93.60%, accuracy of 98.80%, F1-score of 93.26%, and kappa of 91.86%. This research can improve medical imaging and brain tumor analysis through its multimodal fusion approach, giving healthcare practitioners vital insights for personalized treatment plans and informed decision-making.
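The decision-level fusion step mentioned in this abstract can be illustrated with a minimal majority-vote sketch. The three prediction arrays below are hypothetical stand-ins for VGG16, Inception V3, and ResNet50 outputs; the paper's actual fusion rule may differ.

```python
import numpy as np

# Hypothetical per-model class predictions for 5 MRI scans
# (stand-ins for VGG16, Inception V3, and ResNet50 outputs).
vgg16_pred     = np.array([0, 1, 2, 1, 0])
inception_pred = np.array([0, 1, 1, 1, 0])
resnet_pred    = np.array([0, 2, 2, 1, 1])

def majority_vote(*predictions):
    """Fuse per-model class labels by majority vote (ties -> lowest label)."""
    stacked = np.stack(predictions)          # shape: (n_models, n_samples)
    n_classes = stacked.max() + 1
    return np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes).argmax(),
        axis=0, arr=stacked)

fused = majority_vote(vgg16_pred, inception_pred, resnet_pred)
print(fused)  # [0 1 2 1 0]
```

Averaging per-class probabilities instead of hard labels is the other common decision-level variant.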
- Research Article
- 10.1038/s41598-025-23100-0
- Oct 31, 2025
- Scientific Reports
Automated brain tumor detection represents a fundamental challenge in contemporary medical imaging, demanding both precision and computational feasibility for practical implementation. This research introduces a novel Vision Transformer (ViT) framework that incorporates an innovative Hierarchical Multi-Scale Attention (HMSA) methodology for automated detection and classification of brain tumors across four distinct categories: glioma, meningioma, pituitary adenoma, and healthy brain tissue. Our methodology presents several key innovations: (1) multi-resolution patch embedding strategy enabling feature extraction across different spatial scales (8×8, 16×16, and 32×32 patches), (2) computationally optimized transformer architecture achieving 35% reduction in training duration compared to conventional ViT implementations, and (3) probabilistic calibration mechanism enhancing prediction confidence for decision-making applications. Experimental validation was conducted using a comprehensive MRI dataset comprising 7023 T1-weighted contrast-enhanced images sourced from the publicly accessible Brain Tumor MRI Dataset. Our approach achieved superior classification performance with 98.7% accuracy while demonstrating significant improvements over conventional machine learning methodologies (Random Forest: 91.2%, Support Vector Machine: 89.8%, XGBoost: 92.5%), state-of-the-art CNN architectures (EfficientNet-B0: 96.5%, ResNet-50: 95.8%), standard transformers (ViT: 96.8%, Swin Transformer: 97.2%), and hybrid CNN-Transformer approaches (TransBTS: 96.9%, Swin-UNet: 96.6%). The model demonstrates excellent performance with precision of 0.986, recall of 0.988, F1-score of 0.987, and superior calibration quality (Expected Calibration Error: 0.023). The proposed framework establishes a computationally efficient approach for accurate brain tumor classification.
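The multi-resolution patch embedding idea (8×8, 16×16, and 32×32 patches) can be sketched as follows. The fixed random projection stands in for the learned linear embedding, and the toy 64×64 single-channel image is an assumption, not the paper's input size.

```python
import numpy as np

def extract_patches(image, patch_size):
    """Split a square image into non-overlapping patch_size x patch_size tokens."""
    h, w = image.shape
    ph, pw = h // patch_size, w // patch_size
    return (image[:ph * patch_size, :pw * patch_size]
            .reshape(ph, patch_size, pw, patch_size)
            .transpose(0, 2, 1, 3)
            .reshape(ph * pw, patch_size * patch_size))

rng = np.random.default_rng(0)
mri = rng.random((64, 64))  # toy single-channel slice

# One token sequence per scale; a learned linear projection (here a fixed
# random matrix as a stand-in) maps every scale to a shared embedding dim.
embed_dim = 32
tokens = []
for size in (8, 16, 32):
    patches = extract_patches(mri, size)            # (n_patches, size*size)
    projection = rng.standard_normal((size * size, embed_dim))
    tokens.append(patches @ projection)             # (n_patches, embed_dim)

print([t.shape for t in tokens])  # [(64, 32), (16, 32), (4, 32)]
```

The three token sequences can then be concatenated or attended over hierarchically, which is where the HMSA component would operate.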
- Research Article
- 10.1016/j.jgeb.2026.100658
- Jan 15, 2026
- Journal of Genetic Engineering & Biotechnology
ResSGA-Net: A deep learning approach for enhanced brain tumor detection and accurate classification in healthcare imaging systems
- Research Article
- 10.1016/j.jmoldx.2012.10.001
- Dec 31, 2012
- The Journal of Molecular Diagnostics
Blinded Comparator Study of Immunohistochemical Analysis versus a 92-Gene Cancer Classifier in the Diagnosis of the Primary Site in Metastatic Tumors
- Research Article
- 10.54216/jisiot.150101
- Jan 1, 2025
- Journal of Intelligent Systems and Internet of Things
Accurate detection and classification of brain tumors are essential for timely diagnosis and effective treatment planning. This study presents an integrated framework leveraging both machine learning (ML) and deep learning (DL) models for brain tumor detection and classification using MRI images. Two publicly available datasets are utilized: one for binary classification (tumor vs. no tumor) and another for multiclass classification (glioma, meningioma, and pituitary tumors). Comprehensive preprocessing steps, including resizing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM), and feature selection via Chi-square testing, were employed to optimize the dataset for modeling. Machine learning models such as Decision Trees, K-Nearest Neighbors (KNN), Support Vector Machines (SVM), and AdaBoost were compared with deep learning architectures like Convolutional Neural Networks (CNNs) and the pre-trained VGG16 model. Hyperparameter optimization techniques, including grid search and the Adam optimizer, were used to enhance model performance. The models were evaluated using metrics such as accuracy, precision, recall, F1-score, Mean Squared Error (MSE), and Mean Absolute Error (MAE). Results indicate that the VGG16 model consistently outperformed other approaches, achieving high validation accuracy. This study highlights the potential of integrating ML and DL techniques for accurate and efficient brain tumor detection and classification, offering valuable tools for medical diagnostics.
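The GLCM texture-feature step in this pipeline can be sketched in plain NumPy. The quantization level, the single horizontal (0, 1) offset, and the toy image are assumptions, and the Chi-square selection step is omitted.

```python
import numpy as np

def glcm(image, levels=8):
    """Gray-level co-occurrence matrix for horizontal neighbours (offset (0, 1))."""
    q = (image * (levels - 1)).astype(int)  # quantize a [0, 1] image to `levels` bins
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()                      # normalize to joint probabilities

def glcm_features(p):
    """Contrast, energy, and homogeneity from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

rng = np.random.default_rng(1)
slice_ = rng.random((32, 32))               # stand-in for a resized MRI slice
features = glcm_features(glcm(slice_))
print(features)
```

Production pipelines typically average several offsets and angles; the reviewed study then ranks such features with a Chi-square test before classification.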
- Research Article
- 10.1007/s10916-018-1153-9
- Jan 5, 2019
- Journal of Medical Systems
Colon cancer is formed by the uncontrollable growth of abnormal cells in the large intestine (colon); it affects both men and women and is the third most common cancer worldwide. At present, Wireless Capsule Endoscopy (WCE) screening is used to identify colon cancer at an early stage and save the lives of affected patients. In the CTC method, the radiologist analyzes colon polyps in digital images using a computer-aided approach with accurate automatic tumor classification to detect the cancer early; such an approach operates as an intermediary between the input digital image and the radiologist. Therefore, this paper presents a novel computer-aided approach using an ROI-based color histogram and SVM2 to find cancerous tumors in WCE images. The digital WCE image is preprocessed using filtering, and the ROI-based color histogram is computed over the salient region of the colon. In general, the salient region is distinctive because of its low redundancy; hence, saliency is estimated via the ROI-based color histogram from color and structure contrast in the colon image, for the subsequent clustering and tumor classification of the WCE image. K-means clustering is employed on the preprocessed image to locate the colon tumor. Subsequently, features are extracted in terms of contrast, correlation, energy, and homogeneity using the SGLDM method. The SVM2 classifier then uses the selected feature vectors to classify a tumor as normal or malignant. The extracted features are also combined into a hybrid feature vector for more accurate tumor classification. Experimental results show that the presented technique performs tumor detection in colon images accurately, reaching almost 95% in evaluation against existing algorithms.
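The K-means clustering stage described above can be illustrated on a toy intensity image. The synthetic "lesion" and the 1-D intensity-only clustering are simplifications of the paper's WCE pipeline.

```python
import numpy as np

def kmeans(pixels, k=2, iters=20, seed=0):
    """Plain K-means on 1-D intensity values; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(pixels, size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid, then recompute means.
        labels = np.argmin(np.abs(pixels[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = pixels[labels == c].mean()
    return labels, centroids

rng = np.random.default_rng(2)
# Toy image: dark background plus a bright "lesion" patch.
img = rng.normal(0.2, 0.05, (32, 32))
img[10:18, 10:18] = rng.normal(0.9, 0.05, (8, 8))

labels, centroids = kmeans(img.ravel(), k=2)
bright = labels.reshape(32, 32) == centroids.argmax()
print(bright.sum())  # roughly the 64 lesion pixels
```

The brightest cluster serves as the candidate tumor region, from which SGLDM texture features would then be computed.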
- Research Article
- 10.4108/eetpht.10.5627
- Apr 3, 2024
- EAI Endorsed Transactions on Pervasive Health and Technology
INTRODUCTION: Cancer remains a significant health concern, with early detection crucial for effective treatment. Brain tumors, in particular, require prompt diagnosis to improve patient outcomes. Computational models, specifically deep learning (DL), have emerged as powerful tools in medical image analysis, including the detection and classification of brain tumors. DL leverages multiple processing layers to represent data, enabling enhanced performance in various healthcare applications.
 OBJECTIVES: This paper aims to discuss key topics in DL relevant to the analysis of brain tumors, including segmentation, prediction, classification, and assessment. The primary objective is to employ magnetic resonance imaging (MRI) images for the identification and categorization of brain malignancies. By reviewing prior research and findings comprehensively, this study provides valuable insights for academics and professionals in deep learning seeking to contribute to brain tumor identification and classification.
 METHODS: The methodology involves a systematic review of existing literature on DL applications in brain tumor analysis, focusing on MRI imaging. Various DL techniques, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and hybrid models, are explored for their efficacy in tasks such as tumor segmentation, prediction of tumor characteristics, classification of tumor types, and assessment of treatment response.
 RESULTS: The review reveals significant advancements in DL-based approaches for brain tumor analysis, with promising results in segmentation accuracy, tumor subtype classification, and prediction of patient outcomes. Researchers have developed sophisticated DL architectures tailored to address the complexities of brain tumor imaging data, leading to improved diagnostic capabilities and treatment planning.
 CONCLUSION: Deep learning holds immense potential for revolutionizing the diagnosis and management of brain tumors through MRI-based analysis. This study underscores the importance of leveraging DL techniques for accurate and efficient brain tumor identification and classification. By synthesizing prior research and highlighting key findings, this paper provides valuable guidance for researchers and practitioners aiming to contribute to the field of medical image analysis and improve outcomes for patients with brain malignancies.
- Research Article
- 10.1186/s12880-024-01261-0
- May 15, 2024
- BMC Medical Imaging
Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. This model architecture benefits from the transfer learning technique by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model’s performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors.
The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model’s efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
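The federated averaging idea underlying this kind of decentralized training can be sketched as weighted parameter averaging (FedAvg-style); the client weights and sample counts below are hypothetical, not the paper's configuration.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client model parameters (FedAvg-style).

    client_weights: list of clients, each a list of per-layer arrays.
    client_sizes:   number of local samples per client (the weighting).
    """
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three hypothetical clients, each holding one "layer" of 2x2 weights.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [100, 100, 200]  # local samples per client
global_weights = fedavg(clients, sizes)
print(global_weights[0])  # [[2.25 2.25]
                          #  [2.25 2.25]]
```

In a full round, the server would broadcast `global_weights` back to the clients for the next pass of local training, so raw MRI data never leaves a site.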
- Research Article
- 10.1142/s0218001424570155
- Jan 1, 2025
- International Journal of Pattern Recognition and Artificial Intelligence
Diagnosing brain tumors is particularly difficult because they can grow in unpredictable ways and look very different on MRI scans. The current methods used to automatically identify these tumors often struggle because of the wide variety of tumor types and the complex structure of the brain. As a result, these methods don’t always classify tumors accurately, which can affect patient treatment and outcomes. The main problems with these methods are that they find it hard to distinguish between different types of tumors accurately and to deal with the various ways tumors can appear on MRI scans. To improve this situation, our study integrates the robust image classification capabilities of VGG19 with the sequential data processing strengths of LSTM. This synergistic approach enhances our model’s ability to accurately classify various types of brain tumors from MRI scans, addressing the inherent challenges associated with tumor heterogeneity in medical imaging. VGG19, a deep convolutional neural network, is employed to extract detailed features from MRI scans, facilitating precise tumor characterization based on visual patterns, while LSTM complements VGG19 by capturing temporal dependencies in the sequential data of MRI scans, enabling the model to discern subtle variations in tumor appearances over time. By leveraging the combined power of the VGG19 and LSTM architectures, our study achieves significant advancements in the accurate classification of brain tumors from MRI images. This approach not only enhances diagnostic precision but also lays the groundwork for future improvements in neuro-oncological imaging diagnostics. Our study includes 1000 patients evaluated with MRI for brain tumors. We achieved an overall accuracy of 98.32%, demonstrating the efficacy of our VGG19-LSTM model in accurate tumor classification. By using both architectures, our model aims to interpret MRI scans better and, as a result, identify brain tumors more accurately.
This combination is a new step forward in making brain tumor diagnosis more precise through a detailed and cooperative approach using neural networks.
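One LSTM step over a sequence of per-slice CNN feature vectors, as in the VGG19-LSTM pairing above, might look like this in NumPy. The dimensions and random weights are purely illustrative, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b pack the input, forget, output, and cell gates."""
    z = W @ x + U @ h + b                     # (4 * hidden,)
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(3)
feat_dim, hidden = 16, 8                      # CNN feature size, LSTM state size
W = rng.standard_normal((4 * hidden, feat_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

# A toy "sequence" of per-slice CNN feature vectors for one scan.
sequence = rng.standard_normal((5, feat_dim))
h = c = np.zeros(hidden)
for x in sequence:
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (8,)
```

The final hidden state `h` summarizes the slice sequence and would feed a small classification head in the combined model.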
- Research Article
- 10.1093/narcan/zcaf038
- Oct 30, 2025
- NAR Cancer
Oxford Nanopore Technology (ONT)-based methylation sequencing is emerging as a powerful approach for the rapid and accurate classification of brain tumors, an essential component of precision oncology. However, its broader clinical adoption has been limited by reliance on fresh-frozen (FF) tissue, whereas the vast majority of clinical specimens are formalin-fixed paraffin-embedded (FFPE). In this study, we address this limitation by evaluating the effects of FFPE processing on DNA methylation profiles and introducing a validated protocol for ONT-based classification using DNA extracted directly from pathology-marked regions on stained FFPE slides. This approach enables the targeted selection of tumor-rich areas following histological assessment, thereby improving DNA input quality and tumor content. We demonstrate that even small, low-input samples (≥25 ng) can be successfully classified using this method, with high concordance to final integrated neuropathological diagnoses. Our results show that, despite modest methylation loss associated with formalin fixation, classification performance remains robust. Notably, we identify a correlation between methylation degradation and fixation time, supporting a recommendation to limit formalin exposure to ≤3–4 days when possible. By enabling accurate methylation-based tumor classification from routinely processed, stained FFPE tissue, our protocol integrates seamlessly into existing clinical workflows. This expands the accessibility of ONT-based diagnostics and supports informed, timely treatment decisions—even in cases with minimal tissue availability or urgent clinical need.
- Research Article
- 10.3390/sym15030571
- Feb 22, 2023
- Symmetry
A brain tumor can have an impact on the symmetry of a person’s face or head, depending on its location and size. If a brain tumor is located in an area that affects the muscles responsible for facial symmetry, it can cause asymmetry. However, not all brain tumors cause asymmetry. Some tumors may be located in areas that do not affect facial symmetry or head shape. Additionally, the asymmetry caused by a brain tumor may be subtle and not easily noticeable, especially in the early stages of the condition. Brain tumor classification using deep learning involves using artificial neural networks to analyze medical images of the brain and classify them as either benign (not cancerous) or malignant (cancerous). In the field of medical imaging, Convolutional Neural Networks (CNN) have been used for tasks such as the classification of brain tumors. These models can then be used to assist in the diagnosis of brain tumors in new cases. Brain tissues can be analyzed using magnetic resonance imaging (MRI). Misdiagnosing forms of brain tumors significantly lowers patients’ chances of survival. Checking the patient’s MRI scans is a common way to detect existing brain tumors, but this approach takes a long time and is prone to human mistakes when dealing with large amounts of data and various kinds of brain tumors. In our proposed research, Convolutional Neural Network (CNN) models were trained to detect the three most prevalent forms of brain tumors, i.e., Glioma, Meningioma, and Pituitary, and were optimized using the Aquila Optimizer (AQO) for initial population generation and modification; the selected dataset was divided into 80% for the training set and 20% for the testing set. We used the VGG-16, VGG-19, and Inception-V3 architectures with the AQO optimizer for training and validation on the brain tumor dataset, obtaining a best accuracy of 98.95% with the VGG-19 model.
- Research Article
- 10.3390/bdcc9020029
- Jan 31, 2025
- Big Data and Cognitive Computing
For the past few decades, brain tumors have had a substantial influence on human life, and they pose severe health risks if not diagnosed and treated in the early stages. Brain tumors are highly diverse and vary extensively in size, type, and location. This diversity makes it challenging to develop an accurate and reliable diagnostic tool, and several further developments are still required to segment and classify the tumor region effectively. Thus, the purpose of this research is to accurately segment and classify brain tumor Magnetic Resonance Images (MRI) to enhance diagnosis. Primarily, the images are collected from the BraTS 2019, 2020, and 2021 datasets and pre-processed using min–max normalization to eliminate noise. The pre-processed images are then passed to the segmentation stage, where a Variational Spatial Attention with Graph Convolutional Neural Network (VSA-GCNN) is applied to handle the variations in tumor shape, size, and location. The segmented outputs are processed for feature extraction, where an AlexNet model is used to reduce the dimensionality. Finally, in the classification stage, a Bidirectional Gated Recurrent Unit (Bi-GRU) is employed to classify the brain tumor regions as gliomas and meningiomas. From the results, it is evident that the proposed VSA-GCNN-BiGRU shows superior results on the BraTS 2019 dataset in terms of accuracy (99.98%), sensitivity (99.92%), and specificity (99.91%) when compared with existing models. On the BraTS 2020 dataset, the proposed VSA-GCNN-BiGRU shows superior results in terms of Dice similarity coefficient (0.4), sensitivity (97.7%), accuracy (98.2%), and specificity (97.4%). When evaluated on the BraTS 2021 dataset, the proposed VSA-GCNN-BiGRU achieved specificity of 97.6%, Dice similarity of 98.6%, sensitivity of 99.4%, and accuracy of 99.8%.
Overall, the proposed VSA-GCNN-BiGRU supports accurate brain tumor segmentation and classification, offering clinical value in MRI analysis compared with existing models.
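The min–max normalization used in this paper's preprocessing stage is straightforward to sketch; the toy intensity values below are assumptions.

```python
import numpy as np

def min_max_normalize(volume, eps=1e-8):
    """Scale voxel intensities into [0, 1] per volume (min-max normalization)."""
    lo, hi = volume.min(), volume.max()
    return (volume - lo) / (hi - lo + eps)  # eps guards against flat volumes

scan = np.array([[120.0, 300.0],
                 [  0.0, 600.0]])           # toy intensity values
norm = min_max_normalize(scan)
print(norm.min(), norm.max())               # 0.0 and ~1.0
```

Normalizing per volume (rather than per dataset) keeps scanner-to-scanner intensity offsets from dominating the downstream segmentation network.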
- Research Article
- 10.52783/tjjpt.v44.i4.1184
- Oct 26, 2023
- Tuijin Jishu/Journal of Propulsion Technology
Brain tumor classification plays a crucial role in early diagnosis and effective treatment planning. In this paper, we propose a novel approach, K-Nearest Neighbor with Convolutional Neural Networks (KNN-CNN), for accurate brain tumor classification. The proposed method combines the strengths of K-Nearest Neighbor (KNN) and Convolutional Neural Networks (CNNs) to leverage both traditional feature-based classification and deep learning-based feature extraction. We use CNNs to learn high-level features from brain tumor images, and KNN is employed to classify tumors based on the extracted features. The experimental results on a brain tumor dataset demonstrate the effectiveness and efficiency of the KNN-CNN approach, achieving high classification accuracy and outperforming traditional methods.
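The KNN-on-CNN-features idea can be sketched with synthetic feature clusters standing in for CNN embeddings; the cluster centers, dimensionality, and k below are illustrative, not the paper's settings.

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify a query feature vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

rng = np.random.default_rng(4)
# Hypothetical "CNN features": two well-separated clusters, one per class.
class0 = rng.normal(0.0, 0.1, (20, 8))
class1 = rng.normal(1.0, 0.1, (20, 8))
feats = np.vstack([class0, class1])
labels = np.array([0] * 20 + [1] * 20)

print(knn_predict(feats, labels, np.full(8, 0.9)))  # 1
```

In the hybrid approach, `feats` would come from the penultimate layer of the trained CNN rather than from a random generator.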
- Supplementary Content
- 10.1155/2022/7348344
- Jan 1, 2022
- BioMed Research International
This work delivers a novel technique to detect brain tumors with the help of enhanced watershed modeling integrated with a modified ResNet50 architecture. It also involves stochastic approaches to help in developing the enhanced watershed modeling. The incidence of cancer, primarily brain tumors, has risen sharply, alarming researchers from academia and industry. Researchers today need a more effective, accurate, and trustworthy approach to brain tumor tissue detection and classification. Unlike traditional machine learning methods that aim only to enhance classification efficiency, this work highlights the process of extracting several deep features to diagnose brain tumors effectively. This paper explains the modeling of a novel technique that integrates the modified ResNet50 with the Enhanced Watershed Segmentation (EWS) algorithm for brain tumor classification and deep feature extraction. The proposed model uses the ResNet50 model with a modified layer architecture, including five convolutional layers and three fully connected layers, and retains optimal computational efficiency with high-dimensional deep features. The work obtains a combined feature set by retrieving diverse deep features from the ResNet50 deep learning model and feeding them as input to the classifier. The strong performance of the proposed model is achieved by using hybrid ResNet50 features. The suggested hybrid deep-feature-based modified ResNet50 model and the EWS-based modified ResNet50 model classified brain tumor tissue images with accuracies of 92% and 90%, respectively.
- Research Article
- 10.1515/bmt-2015-0071
- Aug 6, 2015
- Biomedizinische Technik. Biomedical engineering
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.
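Sparse coding over a learned dictionary, as used in this framework, can be illustrated with a small orthogonal matching pursuit (OMP) routine. The random dictionary and the two-atom signal are assumptions, and the K-SVD dictionary-learning step itself is omitted.

```python
import numpy as np

def omp(D, x, n_nonzero=3):
    """Orthogonal matching pursuit: sparse code of x over dictionary D
    (columns of D are unit-norm atoms)."""
    residual = x.copy()
    support = []
    code = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual,
        # then re-fit all selected atoms jointly by least squares.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code[support] = coef
    return code

rng = np.random.default_rng(5)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x = 2.0 * D[:, 4] - 1.5 * D[:, 20]        # signal built from two atoms
code = omp(D, x, n_nonzero=2)
print(np.nonzero(code)[0])                # atoms 4 and 20, typically recovered
```

In the paper's setting, K-SVD alternates this sparse-coding step with dictionary updates, and the resulting codes over the feature and coupled dictionaries drive classification and segmentation.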