DEEP LEARNING BASED MODEL FOR AUTOMATIC HEART TUMOR SEGMENTATION IN CT SCAN IMAGES

Abstract

Cardiac tumors present significant challenges for early diagnosis and treatment planning due to their low incidence and complex anatomical location. Precise segmentation of CT scan images plays an essential role in enhancing diagnostic efficacy and clinical decision support. This paper introduces a deep learning-based pipeline for automatically identifying and delineating cardiac tumors with high accuracy. The pipeline begins with preprocessing of the CT scans, including intensity normalization, denoising, resampling, and cropping, to standardize the images and improve tumor visibility. A CNN encoder–decoder architecture inspired by U-Net is adopted to extract multiscale spatial and contextual cues for robust, dense tumor segmentation, with skip connections preserving structural information during reconstruction. To optimize learning and compensate for class imbalance, a hybrid loss function integrating Binary Cross-Entropy, Soft Dice Loss, and Focal Tversky Loss is used. In addition, the traditional machine learning classifiers Naïve Bayes and K-Nearest Neighbors are applied to the CNN outputs for complementary classification and improved generalizability. The pipeline was trained and tested on a CT image dataset with a 70:10:20 train/validation/test split, and experimental results showed robust performance: accuracy of 97.8%, precision of 95.71%, recall of 93.62%, F1-score of 94.63%, Dice coefficient of 94.52%, IoU of 91.83%, and AUC of 96.45%. Segmentation overlays revealed precise delineation of tumor outlines, and ROC curve analysis further confirmed model robustness. These results demonstrate the efficacy of the proposed CNN encoder–decoder pipeline with hybrid loss optimization and complementary classifiers, qualifying it as a reliable and clinically feasible method for automatic segmentation of cardiac tumors in CT scans.
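As an illustration of the hybrid objective described in the abstract, the sketch below combines Binary Cross-Entropy, Soft Dice, and Focal Tversky losses over flattened masks. The equal term weights and the Tversky hyperparameters (`alpha`, `beta`, `gamma`) are illustrative assumptions, not values reported in the paper.

```python
import math

def hybrid_loss(y_true, y_pred, w=(1/3, 1/3, 1/3),
                alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Weighted sum of BCE, Soft Dice, and Focal Tversky losses over
    flat lists of ground-truth labels (0/1) and predicted probabilities."""
    n = len(y_true)
    # Binary Cross-Entropy: penalizes per-voxel probability errors
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for t, p in zip(y_true, y_pred)) / n
    # Soft Dice loss: 1 - 2|A∩B| / (|A| + |B|), measures region overlap
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    dice = 1 - (2 * inter + eps) / (sum(y_true) + sum(y_pred) + eps)
    # Focal Tversky loss: (1 - TI)^gamma, with FN/FP weighted by alpha/beta
    fp = sum((1 - t) * p for t, p in zip(y_true, y_pred))
    fn = sum(t * (1 - p) for t, p in zip(y_true, y_pred))
    tversky = (inter + eps) / (inter + alpha * fn + beta * fp + eps)
    ftl = (1 - tversky) ** gamma
    return w[0] * bce + w[1] * dice + w[2] * ftl
```

In a real training loop each term would be computed on batched tensors in the deep learning framework; the scalar version here only shows how the three terms are combined to balance per-voxel accuracy (BCE) against overlap and class imbalance (Dice, Focal Tversky).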

Similar Papers
  • Research Article
  • 10.36548/jiip.2025.3.016
Multi-Class Heart Disease Detection using ECG Images via Deep CNN Feature Extraction and Ensemble Stacking
  • Sep 1, 2025
  • Journal of Innovative Image Processing
  • Nomula Nagarjuna Reddy + 4 more

Cardiovascular diseases (CVDs) remain the leading cause of mortality worldwide, underscoring the need for trustworthy, automated diagnostic methods. Electrocardiogram (ECG) analysis is a traditional means of identifying cardiac abnormalities, but existing methods based on single convolutional neural networks (CNNs) or traditional machine learning (ML) classifiers suffer from overfitting, poor generalization across datasets, and difficulty handling class imbalance, which hinders robust systems intended for clinical deployment. This research addresses these issues with a hybrid ensemble framework for multi-class ECG image classification. The framework uses transfer learning from CNNs (VGG16, VGG19, ResNet50, and InceptionV3) for deep feature extraction, applies dimensionality reduction (via Principal Component Analysis) to the extracted features, and then classifies them using a stacking ensemble of Random Forest, XGBoost, LightGBM, Multilayer Perceptron (MLP), and Support Vector Machine (SVM), with Logistic Regression serving as the meta-learner. The Synthetic Minority Over-sampling Technique (SMOTE) was applied to augment minority classes and handle dataset imbalance. Trials on datasets from Pakistan, Mendeley, and Bangladesh verified the model's effectiveness, scoring 97.6% accuracy, a 97.59% F1 score, and a 0.9992 macro-AUC, consistently outperforming both traditional ML classifiers and individual CNNs. The findings indicate that CNN-derived features combined with diverse ML classifiers improve the model's robustness, scalability, and ability to generalize across clinical datasets, and underscore its role in real-time ECG-based disease diagnosis as part of advanced clinical decision support.
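The feature-reduction and stacking stage described above can be sketched with scikit-learn. Random synthetic features stand in for the CNN-extracted descriptors, XGBoost/LightGBM are replaced with scikit-learn estimators for self-containment, and all hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
# Sketch of PCA + stacking ensemble with a Logistic Regression meta-learner.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for deep features extracted by pretrained CNNs (e.g. VGG16/ResNet50).
X, y = make_classification(n_samples=300, n_features=64, n_informative=16,
                           n_classes=3, random_state=0)

model = make_pipeline(
    PCA(n_components=16),                      # dimensionality reduction
    StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("mlp", MLPClassifier(max_iter=500, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression(max_iter=500),  # meta-learner
    ),
)
model.fit(X, y)
acc = model.score(X, y)
```

The stacking step trains the meta-learner on out-of-fold predictions of the base estimators, which is what lets the ensemble correct for the individual classifiers' biases.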

  • Research Article
  • Cited by 60
  • 10.1007/s11548-011-0649-2
3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set
  • Aug 11, 2011
  • International Journal of Computer Assisted Radiology and Surgery
  • Karteek Popuri + 3 more

Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time-consuming and prone to human error, so there have been significant efforts to automate the process. However, automatic tumor segmentation from MRI data is a particularly challenging task: tumors vary widely in shape and appearance, with intensities overlapping those of normal brain tissue, and an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficulties. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set, then extract clusters that provide a compact representation of the essential information in these features. The main idea is to incorporate these clustered features into a 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion: the segmentation boundary is driven by learned region statistics in the cluster space. We incorporate prior knowledge of normal brain tissue appearance when estimating these region statistics. In particular, we use a Dirichlet prior that discourages clusters from the normal brain region from appearing in the tumor region, leading to better disambiguation of tumor from brain tissue. We evaluated the method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size, and close to major brain structures. Validation against expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). Using priors on brain/tumor appearance, our proposed automatic 3D variational segmentation method was better able to disambiguate the tumor from the surrounding tissue.

  • Research Article
  • Cited by 4
  • 10.1038/s41598-025-07427-2
Enhanced EEG signal classification in brain computer interfaces using hybrid deep learning models
  • Jul 25, 2025
  • Scientific Reports
  • Abir Das + 4 more

Brain-computer interfaces (BCIs) establish a communication pathway between the human brain and external devices by decoding neural signals. This study focuses on enhancing the classification of Motor Imagery (MI) within BCI systems by leveraging advanced machine learning and deep learning techniques. Accurate classification of electroencephalogram (EEG) data is crucial for enhancing BCI performance. The BCI architecture processes EEG signals through three critical stages: data pre-processing, feature extraction, and classification. The research evaluates the performance of five traditional machine learning classifiers, K-Nearest Neighbors (KNN), Support Vector Classifier (SVC), Logistic Regression (LR), Random Forest (RF), and Naive Bayes (NB), using the "PhysioNet EEG Motor Movement/Imagery Dataset". This dataset encompasses EEG data from various motor tasks, including both actual and imagined movements. Among the traditional classifiers, Random Forest achieved the highest accuracy of 91%, underscoring its efficacy in motor imagery classification within BCI systems. In addition to conventional approaches, the study also explores deep learning techniques, with Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models yielding accuracies of 88.18% and 16.13%, respectively. However, the proposed hybrid model, which synergistically combines CNN and LSTM, significantly surpasses both traditional machine learning and individual deep learning methods, achieving an exceptional accuracy of 96.06%. This substantial improvement highlights the potential of hybrid deep learning models to advance the state of the art in BCI systems, offering a more robust and precise approach to motor imagery classification.

  • Research Article
  • Cited by 15
  • 10.1007/s11906-020-01083-9
Clinical Decision Support for the Diagnosis and Management of Adult and Pediatric Hypertension.
  • Aug 27, 2020
  • Current Hypertension Reports
  • Suchith Vuppala + 1 more

Purpose of Review: To review literature from 2016 to 2019 on clinical decision support (CDS) for diagnosis and management of hypertension in children and adults.
Recent Findings: Ten studies described hypertension CDS systems. Novel advances included the integration of patient-collected blood pressure data, automated information retrieval and management support, and use of CDS in low-resource/developing-world settings and in pediatrics. Findings suggest that CDS increases hypertension detection/control, yet many children and adults with hypertension remain undetected or undercontrolled. CDS challenges included poor usability (from lack of health record integration, excessive data entry requests, and wireless connectivity challenges) and lack of clinician trust in blood pressure measures.
Summary: Hypertension CDS has improved but not closed gaps in the detection and control of hypertension in children and adults. The studies reviewed indicate that the usability of CDS and the system where CDS is deployed (e.g., commitment to high-quality blood pressure measurement/infrastructure) may impact CDS's ability to increase hypertension detection and control.

  • Research Article
  • Cited by 2
  • 10.1007/s10334-014-0472-1
Automatic segmentation of subcutaneous mouse tumors by multiparametric MR analysis based on endogenous contrast.
  • Nov 27, 2014
  • Magnetic Resonance Materials in Physics, Biology and Medicine
  • Stefanie J C G Hectors + 3 more

Contrast-enhanced T1-weighted imaging is usually included in MRI procedures for automatic tumor segmentation. Use of an MR contrast agent may not be appropriate for some applications, however. We assessed the feasibility of automatic tumor segmentation by multiparametric cluster analysis using intrinsic MRI contrast only. Multiparametric MRI consisting of quantitative T1, T2, and apparent diffusion coefficient (ADC) mapping was performed in mice bearing subcutaneous tumors (n = 21). k-means and fuzzy c-means clustering were performed on the multiparametric data with all possible combinations of MRI parameters (i.e., feature vectors) and 2–7 clusters. Clusters associated with tumor tissue were selected on the basis of the relative signal intensity of tumor tissue in T2-weighted images. The optimum segmentation method was determined by quantitative comparison of automatic segmentation with manual segmentation performed by three observers. In addition, the automatically segmented tumor volumes from seven separate tumor data sets were quantitatively compared with histology-derived tumor volumes. The highest similarity index between manual and automatic segmentation (SI(manual, automatic) = 0.82 ± 0.06) was observed for k-means clustering with feature vector {T2, ADC} and four clusters. A strong linear correlation between automatically and manually segmented tumor volumes (R² = 0.99) was observed for this segmentation method. Automatically segmented tumor volumes also correlated strongly with histology-derived tumor volumes (R² = 0.96). Automatic segmentation of mouse subcutaneous tumors can thus be achieved on the basis of endogenous MR contrast only.
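The clustering step described above can be sketched with a plain k-means over per-voxel {T2, ADC} feature pairs. The implementation below is a generic illustration of the algorithm, not the authors' code, and the toy feature values are made up.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D feature vectors, e.g. per-voxel {T2, ADC} pairs.
    Returns per-point cluster labels and the final centroids."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))  # initialize from the data
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        labels = [min(range(k),
                      key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                        for d in range(2)))
                  for p in points]
        # update step: recompute each centroid as its cluster mean
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(m[d] for m in members) / len(members)
                                     for d in range(2))
    return labels, centroids

# Toy voxels: two well-separated groups in (T2, ADC) feature space.
voxels = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labels, cents = kmeans(voxels, 2)
```

In the paper's pipeline, the tumor-associated cluster is then picked by comparing cluster members against the relative T2-weighted signal intensity of tumor tissue.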

  • Research Article
  • Cited by 31
  • 10.1155/2014/401201
Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.
  • Jan 1, 2014
  • Computational and Mathematical Methods in Medicine
  • Yu Guo + 6 more

The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues and has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images based on a fuzzy Markov random field (MRF) model. PET and CT image information is combined through a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used joint Gaussian distribution. In this study, PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations on the fused images were performed both with the proposed method and manually by an experienced radiation oncologist. The results obtained with the two methods were similar, with a Dice similarity coefficient (DSC) of 0.85 ± 0.013. The method achieves effective, automatic segmentation of lung tumors located near other organs with similar intensities in PET and CT images, such as tumors extending into the chest wall or mediastinum.

  • Research Article
  • 10.1200/po-25-00930
Foundation Model Based on Routine Magnetic Resonance Imaging for Brain Tumor Molecular Profiling and Progression Prediction.
  • Feb 1, 2026
  • JCO precision oncology
  • Junxian Li + 5 more

To build a self-supervised magnetic resonance imaging (MRI) foundation model from routine clinical scans and to test whether it can support key glioma-related applications, including post-therapy imaging outcome characterization and molecular marker inference. We created the Unified Multimodal Brain Imaging Foundation (UMBIF) model and pretrained it in a self-supervised manner using 51,029 routine brain MRI examinations collected across multiple institutions. Pretraining used a hybrid objective that couples masked-image reconstruction with contrastive representation learning to encourage anatomically and clinically informative embeddings. The pretrained UMBIF encoder was then adapted to downstream multicenter data sets to predict (1) post-treatment radiographic outcomes and (2) molecular biomarkers, including IDH mutation, MGMT promoter methylation, and 1p/19q codeletion. Performance was benchmarked against commonly used convolutional networks and traditional machine learning classifiers, using accuracy, sensitivity, specificity, and receiver operating characteristic-AUC as primary metrics. Relative to self-supervised initialization derived from natural-image corpora or from approaches emphasizing only large tumor-area crops (self-supervised learning [SSL]-ImageNet and SSL-Cerebral), the UMBIF encoder-decoder design captured richer, more task-relevant features and consistently improved downstream discrimination. The best pretrained model achieved an accuracy of 0.899 (AUC, 0.815) for post-treatment radiographic outcome characterization. For molecular profiling, it reached accuracies/AUCs of 0.898/0.916 for 1p/19q codeletion, 0.829/0.896 for IDH mutation status, and 0.905/0.859 for MGMT promoter methylation, indicating strong potential utility in clinical decision support. UMBIF showed robust transferability to both post-therapy imaging assessment and molecular status prediction in glioma. 
By leveraging large-scale self-supervised pretraining to boost performance while reducing dependence on manual annotations, the framework may facilitate more efficient and reliable diagnostic workflows.

  • Abstract
  • 10.1016/j.ijrobp.2022.07.1073
An Across Feature Map Attention-Based Deep Learning Method for Small Liver Tumor Segmentation in CT Scans
  • Oct 22, 2022
  • International Journal of Radiation Oncology*Biology*Physics
  • S Sang + 1 more


  • Book Chapter
  • 10.1007/978-3-030-68663-5_5
Automatic Tumor Segmentation in Mammogram Images for Healthcare Systems in Smart Cities
  • Jul 2, 2021
  • Alberto Ochoa-Zezzatti + 1 more

Breast cancer is one of the leading causes of cancer in women, making it an important issue for smart city health systems. Smart cities aim to offer their citizens a better quality of life through technology, so it is important to develop methods that diagnose breast tumors quickly and with a high degree of confidence. To this end, this chapter proposes a new method for automatic segmentation of breast cancer tumors using deep learning, intended either as part of a module for automatic detection and diagnosis or as an aid supporting medical staff in diagnosis.
Keywords: Breast cancer, Deep learning, Smart cities

  • Research Article
  • 10.32446/0368-1025it.2024-11-45-52
Artificial intelligence in oncourology: integrated deep learning technologies in the tasks of segmentation of three-dimensional images of kidney tumors
  • Jan 19, 2025
  • Izmeritel`naya Tekhnika
  • Valentin G Nikitaev + 6 more

In order to improve the accuracy of cancer diagnosis, a new convolutional neural network architecture is presented that provides automatic segmentation and detection of kidney tumors in three-dimensional computed tomography images. The proposed approach integrates three complementary technologies: multilevel convolutional processing, residual connections, and U-Net architectural principles, ensuring efficient processing of volumetric medical data. An original neural network system for segmentation of computed tomography kidney images and detection of kidney tumors has been built. To validate the system, a comprehensive experiment was conducted using the publicly available KiTS19 (Kidney Tumor Segmentation 2019) dataset, provided by the University of Minnesota Clinic through the Grand Challenge platform. The dataset includes 300 labeled computed tomography images of kidneys with confirmed diagnoses. The experiment consisted of the following stages: dataset preprocessing, including normalization and augmentation; training the system on 210 cases; and validation on an independent sample of 90 cases. The results demonstrate the high diagnostic efficiency of the system: the accuracy of automatic segmentation of kidney anatomical structures reached 96% (Dice coefficient), and the accuracy of detection and segmentation of tumor formations reached 91% (Dice coefficient). These results can be applied in the following areas of clinical practice: preoperative planning and navigation during organ-preserving operations; automated screening of computed tomography studies for early detection of kidney tumors; quantitative assessment of tumor growth dynamics in disease monitoring; and support for clinical decision-making in oncourology.

  • Research Article
  • Cited by 222
  • 10.1038/s41598-018-33860-7
Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing
  • Oct 19, 2018
  • Scientific Reports
  • Grzegorz Chlebus + 5 more

Automatic liver tumor segmentation would have a big impact on liver therapy planning procedures and follow-up assessment, thanks to standardization and incorporation of full volumetric information. In this work, we develop a fully automatic method for liver tumor segmentation in CT images based on a 2D fully convolutional neural network with an object-based postprocessing step. We describe our experiments on the LiTS challenge training data set and evaluate segmentation and detection performance. Our proposed design cascading two models working on voxel- and object-level allowed for a significant reduction of false positive findings by 85% when compared with the raw neural network output. In comparison with the human performance, our approach achieves a similar segmentation quality for detected tumors (mean Dice 0.69 vs. 0.72), but is inferior in the detection performance (recall 63% vs. 92%). Finally, we describe how we participated in the LiTS challenge and achieved state-of-the-art performance.
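The paper above cascades a voxel-level network with an object-level model to suppress false positives. The sketch below only illustrates the general object-level filtering idea with a simple connected-component size filter on a 2-D binary mask; the 4-connectivity and size threshold are illustrative assumptions, and the paper's actual object-level step is a learned classifier, not a plain size filter.

```python
from collections import deque

def filter_small_objects(mask, min_size):
    """Object-level postprocessing: drop connected components
    (4-connectivity) smaller than min_size pixels from a binary 2-D mask."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # BFS to collect one connected component
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:  # keep only large enough objects
                    for y, x in comp:
                        out[y][x] = 1
    return out

# A 3-pixel tumor candidate survives; an isolated 1-pixel blob is dropped.
mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 0]]
cleaned = filter_small_objects(mask, min_size=2)
```

In 3-D the same idea applies with 6- or 26-connectivity over voxels; libraries such as SciPy provide labeled connected components directly.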

  • Research Article
  • 10.1016/j.compbiolchem.2025.108627
AI-driven diagnosis of Lassa fever: Evidence from Nigerian clinical records.
  • Feb 1, 2026
  • Computational biology and chemistry
  • Adebimpe Esan + 5 more


  • Research Article
  • Cited by 71
  • 10.1016/j.cmpb.2020.105809
Fast level set method for glioma brain tumor segmentation based on Superpixel fuzzy clustering and lattice Boltzmann method
  • Oct 16, 2020
  • Computer Methods and Programs in Biomedicine
  • Asieh Khosravanian + 3 more


  • Research Article
  • Cited by 8
  • 10.1016/j.bjoms.2023.12.017
Implementing a deep learning model for automatic tongue tumour segmentation in ex-vivo 3-dimensional ultrasound volumes
  • Jan 3, 2024
  • The British journal of oral & maxillofacial surgery
  • N.M Bekedam + 6 more


  • Research Article
  • Cited by 37
  • 10.1038/s41598-022-16388-9
Improving automatic liver tumor segmentation in late-phase MRI using multi-model training and 3D convolutional neural networks
  • Jul 18, 2022
  • Scientific Reports
  • Annika Hänsch + 7 more

Automatic liver tumor segmentation can facilitate the planning of liver interventions. For diagnosis of hepatocellular carcinoma, dynamic contrast-enhanced MRI (DCE-MRI) can yield a higher sensitivity than contrast-enhanced CT. However, most studies on automatic liver lesion segmentation have focused on CT. In this study, we present a deep learning-based approach for liver tumor segmentation in the late hepatocellular phase of DCE-MRI, using an anisotropic 3D U-Net architecture and a multi-model training strategy. The 3D architecture improves the segmentation performance compared to a previous study using a 2D U-Net (mean Dice 0.70 vs. 0.65). A further significant improvement is achieved by a multi-model training approach (0.74), which is close to the inter-rater agreement (0.78). A qualitative expert rating of the automatically generated contours confirms the benefit of the multi-model training strategy, with 66 % of contours rated as good or very good, compared to only 43 % when performing a single training. The lesion detection performance with a mean F1-score of 0.59 is inferior to human raters (0.76). Overall, this study shows that correctly detected liver lesions in late-phase DCE-MRI data can be automatically segmented with high accuracy, but the detection, in particular of smaller lesions, can still be improved.
