Capsule network-driven feature extraction and ensemble learning for robust lung tumor classification
- Cited by: 12
- 10.1016/j.bspc.2024.107268
- Nov 30, 2024
- Biomedical Signal Processing and Control
- Cited by: 5
- 10.1016/j.health.2024.100373
- Jun 1, 2025
- Healthcare Analytics
- Cited by: 24
- 10.1016/j.dajour.2023.100307
- Aug 23, 2023
- Decision Analytics Journal
- Cited by: 1
- 10.1016/j.bspc.2024.107371
- May 1, 2025
- Biomedical Signal Processing and Control
- Cited by: 2815
- 10.1007/s11749-016-0481-7
- Apr 19, 2016
- TEST
- Cited by: 19307
- 10.1007/978-0-387-84858-7
- Jan 1, 2009
- 10.22266/ijies2025.0229.58
- Feb 28, 2025
- International Journal of Intelligent Engineering and Systems
- Cited by: 1
- 10.1038/s41540-025-00491-4
- Jan 17, 2025
- npj Systems Biology and Applications
- 10.1186/s12877-025-05683-5
- Jan 23, 2025
- BMC Geriatrics
- 10.1177/08953996241313120
- Jan 28, 2025
- Journal of X-ray science and technology
- Research Article
- Cited by: 126
- 10.1016/j.compbiomed.2005.04.001
- Jun 23, 2005
- Computers in Biology and Medicine
A novel ensemble machine learning for robust microarray data classification
- Research Article
- Cited by: 29
- 10.1016/j.patcog.2008.11.029
- Dec 7, 2008
- Pattern Recognition
A simultaneous learning framework for clustering and classification
- Research Article
- 10.1038/s41598-025-09311-5
- Jul 1, 2025
- Scientific Reports
Brain tumors are a significant contributor to cancer-related deaths worldwide. Accurate and prompt detection is crucial to reduce mortality rates and improve patient survival prospects. Magnetic Resonance Imaging (MRI) is crucial for diagnosis, but manual analysis is resource-intensive and error-prone, highlighting the need for robust Computer-Aided Diagnosis (CAD) systems. This paper proposes a novel hybrid model combining Transfer Learning (TL) and attention mechanisms to enhance brain tumor classification accuracy. Leveraging features from the pre-trained DenseNet201 Convolutional Neural Network (CNN) and integrating a Transformer-based architecture, our approach overcomes challenges like computational intensity, detail detection, and noise sensitivity. We also evaluated five additional pre-trained models (VGG19, InceptionV3, Xception, MobileNetV2, and ResNet50V2) and incorporated Multi-Head Self-Attention (MHSA) and Squeeze-and-Excitation Attention (SEA) blocks individually to improve feature representation. Using the Br35H dataset of 3,000 MRI images, our proposed DenseTransformer model achieved a consistent accuracy of 99.41%, demonstrating its reliability as a diagnostic tool. Statistical analysis using a Z-test based on Cohen's Kappa Score, DeLong's test based on AUC Score, and McNemar's test based on F1-score confirms the model's reliability. Additionally, Explainable AI (XAI) techniques like Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) enhanced model transparency and interpretability. This study underscores the potential of hybrid Deep Learning (DL) models in advancing brain tumor diagnosis and improving patient outcomes.
- Research Article
- 10.1051/epjconf/202532801033
- Jan 1, 2025
- EPJ Web of Conferences
Accurate detection of brain tumours from MRI scans is essential for early diagnosis and treatment. Traditional approaches, including manual analysis by radiologists and classical machine learning methods relying on handcrafted features, often lack consistency and high accuracy. This study explores VGG16, VGG19, DenseNet121, ResNet50, an ensemble model (VGG16 + DenseNet121), MobileNetV2, and NASNet for automated brain tumour detection. Using the Brain Tumour Classification (MRI) dataset, VGG16 and DenseNet121 achieved the highest accuracy of 94.08%, demonstrating the effectiveness of transfer learning. An ensemble model combining the two best-performing models, VGG16 and DenseNet121, was also used to create a better-generalized model with an ROC-AUC value of 0.9960. The findings emphasize CNNs' potential in enhancing the efficiency and precision of brain tumour diagnosis.
- Research Article
- 10.3390/diagnostics15192485
- Sep 28, 2025
- Diagnostics (Basel, Switzerland)
Background: The accurate classification of brain tumor subtypes from MRI scans is critical for timely diagnosis, yet the manual annotation of large datasets remains prohibitively labor-intensive. Method: We present SSPLNet (Semi-Supervised Pseudo-Labeling Network), a dual-branch deep learning framework that synergizes confidence-guided iterative pseudo-labelling with deep feature fusion to enable robust MRI-based tumor classification in data-constrained clinical environments. SSPLNet integrates a custom convolutional neural network (CNN) and a pretrained ResNet50 model, trained in a semi-supervised manner using adaptive confidence thresholds (τ = 0.98 → 0.95 → 0.90) to iteratively refine pseudo-labels for unlabelled MRI scans. Feature representations from both branches are fused via a dense network, combining localized texture patterns with hierarchical deep features. Results: SSPLNet achieves state-of-the-art accuracy across labelled-unlabelled data splits (90:10 to 10:90), outperforming supervised baselines in extreme low-label regimes (10:90) by up to 5.34% over the custom CNN and 5.58% over ResNet50. The framework reduces annotation dependence and maintains 98.17% diagnostic accuracy with 40% unlabelled data, demonstrating its viability for scalable deployment in resource-limited healthcare settings. Conclusions: Statistical evaluation and robustness analysis of SSPLNet's performance confirm that its lower error rate is not due to chance. The bootstrap results also confirm that SSPLNet's reported accuracy falls well within the 95% CI of the sampling distribution.
- Research Article
- 10.1016/j.jtho.2016.11.145
- Jan 1, 2017
- Journal of Thoracic Oncology
MTE11.01 The Clinical Impact of the 2015 WHO Classification of Lung Tumors
- Research Article
- Cited by: 1
- 10.1016/j.prro.2018.07.002
- Jul 18, 2018
- Practical Radiation Oncology
Central3D: A Computer Tool to Help Clinicians Differentiate Central and Peripheral Lung Tumors
- Research Article
- 10.3390/diagnostics15141782
- Jul 15, 2025
- Diagnostics (Basel, Switzerland)
Background/Objectives: Accurate classification of brain tumors is critical for treatment planning and prognosis. While deep convolutional neural networks (CNNs) have shown promise in medical imaging, few studies have systematically compared multiple architectures or integrated ensemble strategies to improve diagnostic performance. This study aimed to evaluate various CNN models and optimize classification performance using a majority voting ensemble approach on T1-weighted MRI brain images. Methods: Seven pretrained CNN architectures were fine-tuned to classify four categories: glioblastoma, meningioma, pituitary adenoma, and no tumor. Each model was trained using two optimizers (SGDM and ADAM) and evaluated on a public dataset split into training (70%), validation (10%), and testing (20%) subsets, and further validated on an independent external dataset to assess generalizability. A majority voting ensemble was constructed by aggregating predictions from all 14 trained models. Performance was assessed using accuracy, Kappa coefficient, true positive rate, precision, confusion matrix, and ROC curves. Results: Among individual models, GoogLeNet and Inception-v3 with ADAM achieved the highest classification accuracy (0.987). However, the ensemble approach outperformed all standalone models, achieving an accuracy of 0.998, a Kappa coefficient of 0.997, and AUC values above 0.997 for all tumor classes. The ensemble demonstrated improved sensitivity, precision, and overall robustness. Conclusions: The majority voting ensemble of diverse CNN architectures significantly enhanced the performance of MRI-based brain tumor classification, surpassing that of any single model. These findings underscore the value of model diversity and ensemble learning in building reliable AI-driven diagnostic tools for neuro-oncology.
- Research Article
- 10.1007/s10278-024-01199-3
- Jul 26, 2024
- Journal of imaging informatics in medicine
The analysis of medical images (MI) is an important part of advanced medicine, as it helps detect and diagnose various diseases early. Classifying brain tumors through magnetic resonance imaging (MRI) poses a challenge demanding accurate models for effective diagnosis and treatment planning. This paper introduces AG-MSTLN-EL, an attention-aided multi-source transfer learning ensemble learning model leveraging multi-source transfer learning (Visual Geometry Group network, ResNet, and GoogLeNet), attention mechanisms, and ensemble learning to achieve robust and accurate brain tumor classification. Multi-source transfer learning allows knowledge extraction from diverse domains, enhancing generalization. The attention mechanism focuses on specific MRI regions, increasing interpretability and classification performance. Ensemble learning combines k-nearest neighbor, Softmax, and support vector machine classifiers, improving both accuracy and reliability. Evaluating the model's performance on a dataset of 3064 brain tumor MRI images, AG-MSTLN-EL outperforms state-of-the-art models in terms of all classification measures. The model's innovative combination of transfer learning, attention mechanism, and ensemble learning provides a reliable solution for brain tumor classification. Its superior performance and high interpretability make AG-MSTLN-EL a valuable tool for clinicians and researchers in medical image analysis.
- Research Article
- Cited by: 2
- 10.1088/1402-4896/ad591b
- Jun 27, 2024
- Physica Scripta
Accurate detection and classification of brain tumors play a critical role in neurological diagnosis and treatment. The proposed work develops a sophisticated technique to precisely identify and classify brain neoplasms in medical imaging. Our approach integrates various techniques, including Otsu's thresholding, anisotropic diffusion, modified three-category Fuzzy C-Means (FCM) segmentation after skull stripping, wavelet transformation for post-processing, and convolutional neural networks for classification. This approach not only recognizes that discriminating healthy brain tissue from tumor-affected areas is challenging, but also focuses on finding abnormalities inside brain tumors and early detection of tiny tumor structures. Initial preprocessing stages improve the visibility of images and the identification of various regions, while segmentation accurately classifies tumor locations into core, edema, and enhancing regions. Ultimately, these segmented zones are refined using wavelet transforms, which remove noise and improve feature extraction. Our CNN architecture uses learned abstractions to distinguish between healthy and malignant regions, ensuring robust classification. It is particularly good at identifying tiny tumors and detecting anomalies inside tumor regions, which provides substantial advances in accurate tumor detection. Comprehensive evaluations validate its efficacy, which could improve clinical diagnostics and perhaps influence brain tumor research and treatment approaches.
- Research Article
- Cited by: 1
- 10.17762/ijritcc.v11i8.7929
- Sep 20, 2023
- International Journal on Recent and Innovation Trends in Computing and Communication
The use of deep learning techniques for White Blood Cell (WBC) classification has garnered significant attention in medical image analysis due to its potential to automate and enhance the accuracy of WBC classification, which is critical for disease diagnosis and infection detection. Convolutional neural networks (CNNs) have revolutionized image analysis tasks, including WBC classification, effectively capturing intricate spatial patterns and distinguishing between different cell types. A key advantage of deep learning-based WBC classification is its capability to handle large datasets, enabling models to learn the diverse variations and characteristics of different cell types. This facilitates robust generalization and accurate classification of previously unseen samples. In this paper, a novel approach called Red Deer Optimization (RDO) with Deep Learning for Robust White Blood Cell Detection and Classification is presented. The proposed model incorporates various components to improve performance and robustness. Image pre-processing involves the utilization of median filtering, while U-Net++ is employed for segmentation, facilitating accurate delineation of WBCs. Feature extraction is performed using the Xception model, which effectively captures informative representations of the WBCs. For classification, a BiGRU model is employed, leveraging its ability to model temporal dependencies in the WBC sequences. To optimize the performance of the BiGRU model, RDO is utilized for parameter tuning, resulting in enhanced accuracy and faster convergence of the deep learning models. The integration of RDO contributes to more reliable detection and classification of WBCs, further improving the overall performance and robustness of the approach. Experimental results demonstrate the superiority of our RDO with deep learning-based approach over traditional methods and standalone deep learning models in achieving robust WBC detection and classification. This research highlights the possibility of combining deep learning techniques with optimization algorithms for improving WBC analysis, offering valuable insights for medical professionals and medical image analysis.
- Research Article
- Cited by: 4
- 10.1109/access.2024.3415482
- Jan 1, 2024
- IEEE Access
Breast cancer, a global health concern, demands innovative diagnostic approaches. The potential of AI (Artificial Intelligence) and ML (Machine Learning) in breast cancer diagnosis warrants exploration alongside conventional methods. Our method partitions breast cancer images into four regions, employing transfer learning with ResNet50 and VGG16 for feature extraction in each region. Extracted features are consolidated and fed into an Extra Tree Classifier. Additionally, an ensemble learning framework combines logistic regression, SVM (Support Vector Machine), Extra Tree Classifier, and Ridge Classifier outputs, harnessing the strengths of each for robust breast cancer image classification. Among five machine learning classification models (Extra Tree Classifier, Logistic Regression, Ridge Classifier, SVM, and Voting Classifier), the goal was to determine the most effective in terms of accuracy. Surprisingly, the Voting Classifier emerged as the top performer with an impressive accuracy of 96.86% across these carcinoma classes, validating the approach's effectiveness. The Extra Tree Classifier followed with an accuracy of 89.66%, while the Ridge Classifier closely trailed at 88.74%. Additionally, Logistic Regression exhibited a notable accuracy rate of 91.42%, and the SVM model achieved a commendable accuracy of 91.44%. This approach integrates deep learning's feature extraction power with the interpretability of traditional models. Results demonstrate our method's efficacy in classifying ductal, lobular, and papillary cancers. The suggested method offers a variety of advantages, including early-stage identification, increased precision, customized medical advice, and simplified analysis by combining feature extraction with ensemble learning. Ongoing research aims to refine algorithms, leading to earlier detection and improved outcomes. This innovative approach has the potential to revolutionize breast cancer care and fundamentally reshape treatment strategies.
- Research Article
- Cited by: 18
- 10.3390/a17060221
- May 21, 2024
- Algorithms
The accurate classification of brain tumors is an important step for early intervention. Artificial intelligence (AI)-based diagnostic systems have been utilized in recent years to help automate the process and provide more objective and faster diagnosis. This work introduces an enhanced AI-based architecture for improved brain tumor classification. We introduce a hybrid architecture that integrates vision transformer (ViT) and deep neural networks to create an ensemble classifier, resulting in a more robust brain tumor classification framework. The analysis pipeline begins with preprocessing and data normalization, followed by extracting three types of MRI-derived information-rich features. The latter included higher-order texture and structural feature sets to harness the spatial interactions between image intensities, which were derived using Haralick features and local binary patterns. Additionally, local deeper features of the brain images are extracted using an optimized convolutional neural network (CNN) architecture. Finally, ViT-derived features are also integrated due to their ability to handle dependencies across larger distances while being less sensitive to data augmentation. The extracted features are then weighted, fused, and fed to a machine learning classifier for the final classification of brain MRIs. The proposed weighted ensemble architecture has been evaluated on publicly available and locally collected brain MRIs of four classes using various metrics. Ablation studies showed that leveraging the benefits of individual components of the proposed architecture leads to improved performance.
- Conference Article
- 10.1109/ecce64574.2025.11013470
- Feb 13, 2025
Robust Multiclass Brain Tumor Classification: Leveraging Swin Transformer and Feature Optimization with Ensemble Learning
- Research Article
- Cited by: 32
- 10.1007/s13369-014-1334-x
- Aug 26, 2014
- Arabian Journal for Science and Engineering
Image segmentation aims to recognize structures in an image that signify scene objects. It is widely used by radiologists to segment medical images into meaningful regions, and various segmentation techniques in medical imaging, depending on the region of interest, have been proposed. In this article, a robust brain tumor classification method is proposed, which focuses on structural analysis of both tumorous and normal tissues. The proposed system consists of preprocessing, segmentation, feature extraction, and classification. In the preprocessing step, an anisotropic filter is used to eliminate noise and enhance image quality for the skull-stripping process. In feature extraction, specific features are extracted from both texture and intensity using a modified multi-texton structure descriptor. A hybrid kernel is designed in the classification stage and applied to train a support vector machine to perform automatic classification of tumors in magnetic resonance imaging (MRI) images. For comparative analysis, the proposed method is compared with existing works using k-fold cross-validation. The accuracy level of 93% for the proposed approach (αK1, K1 + K2, K1 * K2) proved good at detecting tumors in brain MRI images.