LivXAI-Net: An explainable AI framework for liver disease diagnosis with IoT-based real-time monitoring support.

  • Abstract
  • Literature Map
  • References
  • Similar Papers
Abstract

LivXAI-Net: An explainable AI framework for liver disease diagnosis with IoT-based real-time monitoring support.

References
Showing 10 of 56 papers
  • Cited by 36
  • 10.1186/s12933-024-02343-7
Association between triglyceride-glucose related indices and mortality among individuals with non-alcoholic fatty liver disease or metabolic dysfunction-associated steatotic liver disease
  • Jul 4, 2024
  • Cardiovascular Diabetology
  • Qingling Chen + 11 more

  • Cited by 18
  • 10.3390/bios12020070
Detection of Liver Dysfunction Using a Wearable Electronic Nose System Based on Semiconductor Metal Oxide Sensors.
  • Jan 26, 2022
  • Biosensors
  • Andreas Voss + 7 more

  • Cited by 517
  • 10.1145/3561048
Explainable AI (XAI): Core Ideas, Techniques, and Solutions
  • Jan 16, 2023
  • ACM Computing Surveys
  • Rudresh Dwivedi + 10 more

  • Cited by 6
  • 10.1049/cit2.12409
Tri‐M2MT: Multi‐modalities based effective acute bilirubin encephalopathy diagnosis through multi‐transformer using neonatal Magnetic Resonance Imaging
  • Feb 12, 2025
  • CAAI Transactions on Intelligence Technology
  • Kumar Perumal + 3 more

  • Cited by 39
  • 10.1155/2021/4931450
Ensemble of Deep Learning Based Clinical Decision Support System for Chronic Kidney Disease Diagnosis in Medical Internet of Things Environment.
  • Jan 1, 2021
  • Computational Intelligence and Neuroscience
  • Suliman A Alsuhibany + 6 more

  • Cited by 2
  • 10.1016/j.engappai.2025.110138
An explainable artificial intelligence and Internet of Things framework for monitoring and predicting cardiovascular disease
  • Mar 1, 2025
  • Engineering Applications of Artificial Intelligence
  • Mubarak Albarka Umar + 3 more

  • Cited by 7
  • 10.1109/iotm.001.2300138
Artificial Intelligence Empowered Digital Twin and NFT-Based Patient Monitoring and Assisting Framework for Chronic Disease Patients
  • Mar 1, 2024
  • IEEE Internet of Things Magazine
  • Siva Sai + 3 more

  • Cited by 134
  • 10.1002/hep.31558
Risk Prediction Models for Post-Operative Mortality in Patients With Cirrhosis.
  • Dec 10, 2020
  • Hepatology
  • Nadim Mahmud + 9 more

  • Cited by 368
  • 10.1002/hep.20750
Steatosis: Co-factor in Other Liver Diseases
  • Jan 1, 2005
  • Hepatology
  • Elizabeth E Powell + 2 more

  • Cited by 10
  • 10.1109/access.2023.3329759
Explainable AI for Enhanced Interpretation of Liver Cirrhosis Biomarkers
  • Jan 1, 2023
  • IEEE Access
  • Greeshma Arya + 5 more

Similar Papers
  • Research Article
  • Cited by 60
  • 10.1016/j.habitatint.2022.102660
An explainable model for the mass appraisal of residences: The application of tree-based Machine Learning algorithms and interpretation of value determinants
  • Aug 31, 2022
  • Habitat International
  • Muzaffer Can Iban

  • Research Article
  • Cited by 2
  • 10.1016/j.imed.2024.09.005
Blood pressure abnormality detection and Interpretation utilizing Explainable Artificial Intelligence
  • Feb 1, 2025
  • Intelligent Medicine
  • Hedayetul Islam + 2 more

  • Research Article
  • 10.47482/acmr.1677545
Predicting recurrence of differentiated thyroid cancer with an explainable artificial intelligence model
  • Sep 28, 2025
  • Archives of Current Medical Research
  • Ahmet Cankat Öztürk + 2 more

Background: This study aimed to predict the recurrence of differentiated thyroid cancer (DTC) and identify its most representative risk factors using an explainable artificial intelligence model. Methods: The publicly available Differentiated Thyroid Cancer Recurrence dataset from the University of California Irvine Machine Learning Repository, comprising 383 patients and 17 features, was used. Five classifiers (Random Forest, Gradient Boosting, AdaBoost, Support Vector Classifier, and Logistic Regression) were employed to predict recurrence. Permutation feature importance (PFI) and SHapley Additive exPlanations (SHAP), two explainable artificial intelligence methods, were used to determine the features with the greatest impact on the prediction result. Results: The Random Forest algorithm outperformed the others, achieving an accuracy of 97.39% and an area under the curve of 0.993. SHAP and permutation importance analyses identified response to treatment, ATA risk stratification, tumor stage, and patient age as the factors contributing most to the model's predictions, a finding consistent with prognostic markers reported in the literature. Conclusion: The proposed explainable machine learning framework showed satisfactory results in predicting DTC recurrence while identifying clinically important features. This approach can offer valuable support to clinicians in the early identification of high-risk patients and the personalization of surveillance strategies.
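
To make the workflow above concrete, here is a minimal, hypothetical sketch of a Random Forest recurrence classifier with permutation feature importance, using scikit-learn and a synthetic stand-in dataset (not the UCI data or the authors' code); SHAP values could be added analogously via shap.TreeExplainer.

```python
# Minimal sketch (not the authors' code): Random Forest classifier with
# permutation feature importance on a synthetic placeholder dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in: 383 samples, 17 features, imbalanced outcome.
X, y = make_classification(n_samples=383, n_features=17, n_informative=6,
                           weights=[0.72, 0.28], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))

# Permutation feature importance: drop in model score when one feature is shuffled.
pfi = permutation_importance(rf, X_te, y_te, n_repeats=30, random_state=0)
for idx in pfi.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: {pfi.importances_mean[idx]:.4f}")
```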

  • Research Article
  • Cited by 1
  • 10.1186/s12911-025-02874-3
Explainable AI for enhanced accuracy in malaria diagnosis using ensemble machine learning models
  • Apr 11, 2025
  • BMC Medical Informatics and Decision Making
  • Olushina Olawale Awe + 4 more

Background: Malaria, an infectious disease caused by protozoan parasites belonging to the Plasmodium genus, remains a significant public health challenge, with African regions bearing the heaviest burden. Machine learning techniques have shown great promise in improving the diagnosis of infectious diseases such as malaria. Objectives: This study aims to integrate ensemble machine learning models and Explainable Artificial Intelligence (XAI) frameworks to enhance the diagnostic accuracy of malaria. Methods: The study utilized a dataset from the Federal Polytechnic Ilaro Medical Centre, Ilaro, Ogun State, Nigeria, which includes information from 337 patients aged between 3 and 77 years (180 females and 157 males) over a 4-week period. Ensemble methods, namely Random Forest, AdaBoost, Gradient Boost, XGBoost, and CatBoost, were employed after addressing class imbalance through oversampling techniques. Explainable AI techniques, such as LIME, Shapley Additive Explanations (SHAP), and Permutation Feature Importance, were utilized to enhance transparency and interpretability. Results: Among the ensemble models, Random Forest demonstrated the highest performance with an ROC AUC score of 0.869, followed closely by CatBoost at 0.787. XGBoost, Gradient Boost, and AdaBoost achieved ROC AUC scores of 0.770, 0.747, and 0.633, respectively. These methods evaluated the influence of different characteristics on the probability of malaria diagnosis, revealing critical features that contribute to prediction outcomes. Conclusion: By integrating ensemble machine learning models with explainable AI frameworks, the study promoted transparency in decision-making processes, thereby empowering healthcare providers with actionable insights for improved treatment strategies and enhanced patient outcomes, particularly in malaria management.
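
The oversample-then-compare step described above can be sketched as follows; this is an illustrative example only, using a synthetic imbalanced dataset and naive random oversampling in place of the study's actual data and tooling.

```python
# Minimal sketch (hypothetical data, not the study's records): oversample the
# minority class, then compare ensemble classifiers by ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

X, y = make_classification(n_samples=337, n_features=12,
                           weights=[0.8, 0.2], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=1)

# Naive random oversampling of the minority class, applied to the training split only.
min_mask = y_tr == 1
X_min, y_min = resample(X_tr[min_mask], y_tr[min_mask],
                        n_samples=int((~min_mask).sum()), random_state=1)
X_bal = np.vstack([X_tr[~min_mask], X_min])
y_bal = np.concatenate([y_tr[~min_mask], y_min])

models = {"RandomForest": RandomForestClassifier(random_state=1),
          "AdaBoost": AdaBoostClassifier(random_state=1),
          "GradientBoost": GradientBoostingClassifier(random_state=1)}
for name, model in models.items():
    model.fit(X_bal, y_bal)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC AUC = {auc:.3f}")
# SHAP, LIME, or permutation importance can then be applied to the best model.
```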

  • Research Article
  • 10.1200/jco.2024.42.4_suppl.688
Predicting efficacy in patients with locally advanced (LA)/metastatic urothelial carcinoma (mUC) treated with avelumab using machine learning and explainability approaches.
  • Feb 1, 2024
  • Journal of Clinical Oncology
  • Patrizia Giannatempo + 19 more

Background: Internationally, avelumab is approved as maintenance therapy for patients (pts) with LA/mUC whose disease did not progress after first-line (1L) platinum-based chemotherapy. However, 54% of pts progressed on avelumab, and limited data are available on predictive biomarkers of efficacy. Artificial intelligence (AI) methods are increasingly being investigated to generate predictive models applicable in clinical practice. In this study, we developed a set of machine learning (ML) classifiers and survival analysis algorithms using real-world data to predict response and progression-free survival (PFS) in LA/mUC patients treated with avelumab. We also applied explainability methods to the developed algorithms. Methods: We prospectively collected real-world data from 115 pts receiving avelumab from 2021 to 2022, treated in 20 institutions in Italy (MALVA dataset). To predict the efficacy of immunotherapy (IO), two outcomes were studied: objective response rate (ORR) and PFS. The dataset was split into training and test sets with an 80%-20% ratio. Missing values were imputed using a Bayesian Ridge iterative imputer fitted on the training set. Eight classifier models were used for ORR: XGBoost (XGB), Logistic Regression (LR), Random Forest (RF), Multilayer Perceptron (MLP), Support Vector Machine (SVM), AdaBoost (AB), Extra Trees (ET), and LightGBM (LGBM). Five ML survival analysis models were used to analyse PFS: Cox Proportional Hazards (CPH), Random Survival Forest (RSF), Gradient Boosting (GB), Extra Survival Trees (EST), and Survival Support Vector Machine (SSVM). Finally, SHAP values, an eXplainable AI (XAI) technique, were calculated to evaluate each feature and to explain the predictions. Results: Based on clinical expertise, 31 features were selected through a clinical hypothesis. For ORR prediction, the two best-performing models were XGB and ET, both without oversampling. On the test set, XGB achieved an F1 score of 0.77, accuracy of 0.77, and AUC of 0.81, while ET reached an F1 score and accuracy of 0.81 and an AUC of 0.80. For PFS prediction, EST and RSF obtained the best performances, with c-indices of 0.71 and 0.72 and average AUCs of 0.75 and 0.76, respectively. According to SHAP, the most important feature for predicting ORR was ORR after first-line chemotherapy, while bone metastases, absolute leukocyte count at baseline, and ECOG PS were the most important features for PFS prediction. Conclusions: Machine learning is useful for predicting efficacy in advanced urothelial carcinoma. The explainability analyses confirmed findings from recent years of immuno-oncology research, conferring trustworthiness on the ML models. Further validation of these approaches on larger, external pt cohorts is needed.
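
The imputation step described in the Methods (a Bayesian Ridge iterative imputer fitted on the training split only) can be illustrated with scikit-learn; the data below are synthetic placeholders, not the MALVA cohort.

```python
# Minimal sketch of leakage-free iterative imputation with a Bayesian Ridge
# estimator, as described above; the dataset is a synthetic placeholder.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(115, 31))            # 115 patients, 31 clinical features (hypothetical)
X[rng.random(X.shape) < 0.1] = np.nan     # inject ~10% missing values
y = rng.integers(0, 2, size=115)          # objective response (0/1), simulated

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

imputer = IterativeImputer(estimator=BayesianRidge(), random_state=0)
X_tr_imp = imputer.fit_transform(X_tr)    # fit on the training split only
X_te_imp = imputer.transform(X_te)        # apply to the held-out 20% without leakage
print("remaining NaNs in test set:", np.isnan(X_te_imp).sum())
# Downstream: classifiers for ORR and survival models for PFS would be fitted here.
```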

  • Research Article
  • 10.3390/ph18111659
Machine Learning-Integrated Explainable Artificial Intelligence Approach for Predicting Steroid Resistance in Pediatric Nephrotic Syndrome: A Metabolomic Biomarker Discovery Study
  • Nov 1, 2025
  • Pharmaceuticals
  • Fatma Hilal Yagin + 5 more

Aim: Nephrotic syndrome (NS) represents a complex glomerular disorder with significant clinical heterogeneity across pediatric and adult populations. Although glucocorticosteroids have constituted the mainstay of therapeutic intervention for more than six decades, primary treatment resistance manifests in approximately 20% of pediatric patients and 50% of adult cohorts. Steroid-resistant nephrotic syndrome (SRNS) is associated with substantially greater morbidity compared to steroid-sensitive nephrotic syndrome (SSNS), characterized by both iatrogenic glucocorticoid toxicity and progressive nephron loss with attendant decline in renal function. Based on this, the current study aims to develop a robust machine learning (ML) model integrated with explainable artificial intelligence (XAI) to distinguish SRNS and identify important biomarker candidate metabolites. Methods: In the study, biomarker candidate compounds obtained from proton nuclear magnetic resonance (¹H NMR) metabolomics analyses on plasma samples taken from 41 patients with NS (27 SSNS and 14 SRNS) were used. We developed ML models to predict steroid resistance in pediatric NS using metabolomic data. After preprocessing with MICE-LightGBM imputation for missing values (<30%) and standardization, the dataset was randomly split into training (80%) and testing (20%) sets, repeated 100 times for robust evaluation. Four supervised algorithms (XGBoost, LightGBM, AdaBoost, and Random Forest) were trained and evaluated using AUC, sensitivity, specificity, F1-score, accuracy, and Brier score. XAI methods including SHAP (for global feature importance and model interpretability) and LIME (for individual patient-level explanations) were applied to identify key metabolomic biomarkers and ensure clinical transparency of predictions. Results: Among the four ML algorithms evaluated, Random Forest demonstrated superior performance with the highest accuracy (0.87 ± 0.12), sensitivity (0.90 ± 0.18), AUC (0.92 ± 0.09), and lowest Brier score (0.20 ± 0.03), followed by LightGBM, AdaBoost, and XGBoost. The superiority of the Random Forest model was confirmed by paired t-tests, which revealed significantly higher AUC and lower Brier scores compared to all other algorithms (p < 0.05). SHAP analysis identified key metabolomic biomarkers consistently across all models, including glucose, creatine, 1-methylhistidine, homocysteine, and acetone. Low glucose and creatine levels were positively associated with steroid resistance risk, while higher propylene glycol and carnitine concentrations increased SRNS probability. LIME analysis provided patient-specific interpretability, confirming these metabolomic patterns at the individual level. The XAI approach successfully identified clinically relevant metabolomic signatures for predicting steroid resistance with high accuracy and interpretability. Conclusions: The present study successfully identified candidate metabolomic biomarkers capable of predicting SRNS prior to treatment initiation and elucidating critical molecular mechanisms underlying steroid resistance regulation.
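
A minimal sketch of the repeated-split evaluation protocol described above (100 random 80/20 splits, AUC and Brier score per split, paired t-test between models), using simulated data rather than the NMR metabolomics cohort.

```python
# Minimal sketch (simulated data, not the patient cohort): repeated random
# 80/20 splits with per-split AUC and Brier score, plus a paired t-test.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=41, n_features=20, n_informative=5,
                           weights=[0.66, 0.34], random_state=0)
aucs = {"RF": [], "Ada": []}
briers = {"RF": [], "Ada": []}
for seed in range(100):                                  # 100 repeated splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=seed)
    for name, clf in [("RF", RandomForestClassifier(random_state=seed)),
                      ("Ada", AdaBoostClassifier(random_state=seed))]:
        p = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        aucs[name].append(roc_auc_score(y_te, p))
        briers[name].append(brier_score_loss(y_te, p))

print("RF  AUC %.2f ± %.2f" % (np.mean(aucs["RF"]), np.std(aucs["RF"])))
print("Ada AUC %.2f ± %.2f" % (np.mean(aucs["Ada"]), np.std(aucs["Ada"])))
print("paired t-test on AUC:", ttest_rel(aucs["RF"], aucs["Ada"]))
```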

  • Front Matter
  • Cited by 17
  • 10.1016/s0168-8278(01)00303-8
Antibodies and primary biliary cirrhosis – piecing together the jigsaw
  • Dec 18, 2001
  • Journal of Hepatology
  • James Neuberger

  • Research Article
  • Cited by 81
  • 10.1016/s1542-3565(04)00465-3
Do antinuclear antibodies in primary biliary cirrhosis patients identify increased risk for liver failure?
  • Dec 1, 2004
  • Clinical Gastroenterology and Hepatology
  • Wei-Hong Yang + 5 more

  • Research Article
  • Cited by 29
  • 10.1111/ajt.13828
First-Degree Living-Related Donor Liver Transplantation in Autoimmune Liver Diseases.
  • May 23, 2016
  • American Journal of Transplantation
  • A.D Aravinthan + 13 more

  • Research Article
  • Cited by 46
  • 10.1016/j.inffus.2024.102472
Explainable AI-driven IoMT fusion: Unravelling techniques, opportunities, and challenges with Explainable AI in healthcare
  • May 16, 2024
  • Information Fusion
  • Niyaz Ahmad Wani + 4 more

  • Research Article
  • Cited by 60
  • 10.3390/su12166434
Robustness Evaluations of Sustainable Machine Learning Models against Data Poisoning Attacks in the Internet of Things
  • Aug 10, 2020
  • Sustainability
  • Corey Dunn + 2 more

With the increasing popularity of the Internet of Things (IoT) platforms, the cyber security of these platforms is a highly active area of research. One key technology underpinning smart IoT systems is machine learning, which classifies and predicts events from large-scale data in IoT networks. Machine learning is susceptible to cyber attacks, particularly data poisoning attacks that inject false data when training machine learning models. Data poisoning attacks degrade the performances of machine learning models. It is an ongoing research challenge to develop trustworthy machine learning models resilient and sustainable against data poisoning attacks in IoT networks. We studied the effects of data poisoning attacks on machine learning models, including the gradient boosting machine, random forest, naive Bayes, and feed-forward deep learning, to determine the levels to which the models should be trusted and said to be reliable in real-world IoT settings. In the training phase, a label modification function is developed to manipulate legitimate input classes. The function is employed at data poisoning rates of 5%, 10%, 20%, and 30% that allow the comparison of the poisoned models and display their performance degradations. The machine learning models have been evaluated using the ToN_IoT and UNSW NB-15 datasets, as they include a wide variety of recent legitimate and attack vectors. The experimental results revealed that the models’ performances will be degraded, in terms of accuracy and detection rates, if the number of the trained normal observations is not significantly larger than the poisoned data. At the rate of data poisoning of 30% or greater on input data, machine learning performances are significantly degraded.
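
The label-modification attack described above can be sketched as a simple label-flipping function applied at increasing poisoning rates; this illustrative example uses a synthetic dataset and a Random Forest rather than the ToN_IoT or UNSW NB-15 data.

```python
# Minimal sketch (synthetic data): flip a fraction of training labels to
# emulate label-modification poisoning and measure the accuracy drop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

def poison_labels(labels, rate):
    """Flip `rate` of the binary training labels to simulate poisoning."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

for rate in [0.0, 0.05, 0.10, 0.20, 0.30]:
    clf = RandomForestClassifier(random_state=42).fit(X_tr, poison_labels(y_tr, rate))
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"poisoning {int(rate * 100):>2}% -> test accuracy {acc:.3f}")
```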

  • Front Matter
  • Cited by 14
  • 10.1046/j.1440-1746.2000.02041.x
Autoimmune disease overlaps and the liver: two for the price of one?
  • Jan 1, 2000
  • Journal of Gastroenterology and Hepatology
  • Ian R Mackay

… reported on the CAH–PBC overlap syndrome, citing 11 cases that were clinically typical, although not studied for the auto-antibodies characteristic of AIH: their cases were seropositive for anti-M2, but an anti-M4 type of reactivity was not demonstrable. The report of the International Autoimmune Hepatitis Group (IAIHG), …

  • Research Article
  • 10.1142/s0219519423500707
A Two-Tier Machine Learning Framework for Risk Assessment in Drivers with Cardiovascular Disorders
  • Jul 20, 2023
  • Journal of Mechanics in Medicine and Biology
  • Goutam Kumar Sahoo + 4 more

This work proposes a scheme based on a two-tier machine learning (ML) framework for the initial screening of commercial drivers with cardiovascular disorders prior to the actual driving assessment. First, the proposed framework aims to provide primary health care to cardiac drivers in resource-constrained scenarios such as bus terminals with the help of paramedical staff. The prediction of cardiovascular disease (CVD) in drivers is done using a variety of ML approaches, including Support Vector Machines (SVMs), Random Forests (RFs), Logistic Regression (LR), K-Nearest Neighbor (KNN), Decision Trees (DT), Naive Bayes (NB), and XGBoost (XGB). The K-fold cross-validation technique also tests the model's ability to predict CVD. Second, a no-drive alert is issued whenever the model predicts heart disease, and a comma-separated value (CSV) file stores the predicted abnormal parameters. An email-based data communication has been set up to transfer the CSV file, and a MySQL database has been created to store the abnormal data received in hospitals, which will help cardiologists with the proper diagnosis. This internet of medical things (IoMT) process will enable drivers to come to the hospital for medication only when advised by a cardiologist, thereby reducing the burden of routine hospital visits. The Cleveland database of the UCI ML repository, a multivariate CVD dataset that contains 14 features from 303 people, is utilized to test the performance of the proposed model. The proposed model's performance is also evaluated using two more publicly available heart disease datasets, i.e., the MIT-BIH arrhythmia dataset and the CVD dataset. The XGB, KNN, and RF ML techniques outperform state-of-the-art methods with performance accuracies of 88.53%, 91.8%, and 93.44%, respectively, for the Cleveland database; performance accuracies of 99.20%, 98.82%, and 99.08% for the MIT-BIH arrhythmia dataset; and performance accuracies of 73.29%, 69.48%, and 71.74% for the CVD dataset. Furthermore, the results showed comparable performance to the rest of the ML techniques. Early detection of CVD and timely consultation with specialist doctors, before the condition becomes serious, can protect drivers from vehicular accidents while they seek health care.
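
A rough sketch of the two-tier flow described above, assuming scikit-learn and a synthetic stand-in for the Cleveland data: tier one screens with cross-validated classifiers, tier two writes flagged records to a CSV file (the email and MySQL hand-off is omitted).

```python
# Minimal sketch (synthetic stand-in for the Cleveland data): K-fold screening
# of several classifiers, then dumping flagged records to a CSV file.
import csv
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=303, n_features=13, random_state=7)

models = {"LogReg": LogisticRegression(max_iter=1000),
          "KNN": KNeighborsClassifier(),
          "RandomForest": RandomForestClassifier(random_state=7)}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)        # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")

# Tier 2: persist records flagged as abnormal for later review by a cardiologist.
best = models["RandomForest"].fit(X, y)
flagged = X[best.predict(X) == 1]
with open("abnormal_parameters.csv", "w", newline="") as f:
    csv.writer(f).writerows(flagged.tolist())
print(f"wrote {len(flagged)} flagged records to abnormal_parameters.csv")
```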

  • Research Article
  • 10.2147/clep.s505966
Explainable Prediction of Long-Term Glycated Hemoglobin Response Change in Finnish Patients with Type 2 Diabetes Following Drug Initiation Using Evidence-Based Machine Learning Approaches.
  • Mar 1, 2025
  • Clinical epidemiology
  • Gunjan Chandra + 7 more

This study applied machine learning (ML) and explainable artificial intelligence (XAI) to predict changes in HbA1c levels, a critical biomarker for monitoring glycemic control, within 12 months of initiating a new antidiabetic drug in patients diagnosed with type 2 diabetes (T2D). It also aimed to identify the predictors associated with these changes. Electronic health records (EHR) from 10,139 type 2 diabetes patients in North Karelia, Finland, were used to train models integrating randomized controlled trial (RCT)-derived HbA1c change values as predictors, creating offset models that integrate RCT insights with real-world data. Various ML models, including linear regression (LR), multi-layer perceptron (MLP), ridge regression (RR), random forest (RF), and XGBoost (XGB), were evaluated using R² and RMSE metrics. Baseline models used data at or before drug initiation, while follow-up models included the first post-drug HbA1c measurement, improving performance by incorporating dynamic patient data. Model performance was also compared to expected HbA1c changes from clinical trials. Results showed that the ML models outperformed the RCT-based model; LR, MLP, and RR had comparable performance, while RF and XGB exhibited overfitting. The follow-up MLP model outperformed the baseline MLP model, with higher R² scores (0.74, 0.65) and lower RMSE values (6.94, 7.62), compared to the baseline model (R²: 0.52, 0.54; RMSE: 9.27, 9.50). Key predictors of HbA1c change included baseline and post-drug initiation HbA1c values, fasting plasma glucose, and HDL cholesterol. Using EHR and ML models allows for the development of more realistic and individualized predictions of HbA1c changes, accounting for more diverse patient populations and their heterogeneous nature, offering more tailored and effective treatment strategies for managing T2D. The use of XAI provided insights into the influence of specific predictors, enhancing model interpretability and clinical relevance. Future research will explore treatment selection models.
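
The baseline-versus-follow-up comparison described above can be illustrated with a small synthetic example: two regression models scored with R² and RMSE, where the follow-up model also sees the first post-drug HbA1c measurement. All variable names and data here are hypothetical, not the North Karelia EHR data.

```python
# Minimal sketch (fully synthetic): baseline vs. follow-up regression models
# for HbA1c change, compared with R^2 and RMSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
baseline_hba1c = rng.normal(60, 12, n)              # mmol/mol, simulated
fpg = rng.normal(8, 2, n)                           # fasting plasma glucose, simulated
rct_offset = rng.normal(-8, 3, n)                   # RCT-derived expected change (offset term)
post_drug_hba1c = baseline_hba1c + rct_offset + rng.normal(0, 5, n)
true_change = post_drug_hba1c - baseline_hba1c + 0.5 * fpg + rng.normal(0, 4, n)

X_base = np.column_stack([baseline_hba1c, fpg, rct_offset])
X_follow = np.column_stack([baseline_hba1c, fpg, rct_offset, post_drug_hba1c])

for label, X in [("baseline", X_base), ("follow-up", X_follow)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, true_change,
                                              test_size=0.2, random_state=3)
    pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{label:>9}: R2 {r2_score(y_te, pred):.2f}, RMSE {rmse:.2f}")
```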

  • Research Article
  • Cited by 35
  • 10.1186/s13040-021-00243-0
A comparison of methods for interpreting random forest models of genetic association in the presence of non-additive interactions
  • Jan 29, 2021
  • BioData Mining
  • Alena Orlenko + 1 more

Background: Non-additive interactions among genes are frequently associated with a number of phenotypes, including known complex diseases such as Alzheimer's, diabetes, and cardiovascular disease. Detecting interactions requires careful selection of analytical methods, and some machine learning algorithms are unable or underpowered to detect or model feature interactions that exhibit non-additivity. The Random Forest method is often employed in these efforts due to its ability to detect and model non-additive interactions. In addition, Random Forest has the built-in ability to estimate feature importance scores, a characteristic that allows the model to be interpreted with the order and effect size of the feature association with the outcome. This characteristic is very important for epidemiological and clinical studies, where the results of predictive modeling could be used to define the future direction of research efforts. Alternative ways to interpret the model are a permutation feature importance metric, which employs a permutation approach to calculate a feature contribution coefficient in units of the decrease in the model's performance, and Shapley additive explanations, which employ a cooperative game theory approach. Currently, it is unclear which Random Forest feature importance metric provides a superior estimation of the true informative contribution of features in genetic association analysis. Results: To address this issue, and to improve the interpretability of Random Forest predictions, we compared different methods for feature importance estimation in real and simulated datasets with non-additive interactions. As a result, we detected a discrepancy between the metrics for the real-world datasets and further established that the permutation feature importance metric provides more precise feature importance rank estimation for the simulated datasets with non-additive interactions. Conclusions: By analyzing both real and simulated data, we established that the permutation feature importance metric provides more precise feature importance rank estimation in the presence of non-additive interactions.
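
To show how the two importance metrics compared above are computed in practice, here is a minimal sketch on a simulated non-additive (XOR-style) interaction; it uses scikit-learn's impurity-based and permutation importances and is not the authors' analysis code.

```python
# Minimal sketch (simulated data): an XOR-style non-additive interaction, with
# Random Forest impurity-based importances set against permutation importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.integers(0, 2, size=(n, 6)).astype(float)          # 6 binary "SNP-like" features
y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0).astype(int)   # outcome depends only on x0 XOR x1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("impurity importances:   ", np.round(rf.feature_importances_, 3))
pfi = permutation_importance(rf, X_te, y_te, n_repeats=30, random_state=0)
print("permutation importances:", np.round(pfi.importances_mean, 3))
```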

More from: Computer methods and programs in biomedicine
  • Research Article
  • 10.1016/j.cmpb.2025.109027
SEMI-PLC: A framework for semi-supervised medical images segmentation with pseudo label correction.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Shiyuan Huang + 4 more

  • Research Article
  • 10.1016/j.cmpb.2025.108997
Discovery of novel microtubule destabilizing agents via virtual screening methods and antitumor evaluation.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Sheng Zheng + 8 more

  • Research Article
  • 10.1016/j.cmpb.2025.109009
3D-1D modelling of cranial mesh heating induced by low or medium frequency magnetic fields.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Alessandro Arduino + 6 more

  • Research Article
  • 10.1016/j.cmpb.2025.108992
An EIT-based assessment of regional ventilation delay under incremental PEEP: Influence of sex, smoking, vaping, asthma, and BMI.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Rongqing Chen + 5 more

  • Research Article
  • 10.1016/j.cmpb.2025.108963
Interpretable epidemic state estimation via rule based modeling.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Giulio Pisaneschi + 3 more

  • Research Article
  • 10.1016/j.cmpb.2025.109022
Novel fusion architecture of multi-location blood flow sounds for arteriovenous fistula stenosis diagnosis.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Haowei Liu + 11 more

  • Research Article
  • 10.1016/j.cmpb.2025.108986
Combining tumor habitat radiomics and circulating tumor cell data for predicting high-grade pathological components in lung adenocarcinoma.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Hongchang Wang + 11 more

  • Research Article
  • 10.1016/j.cmpb.2025.108974
DG-MSGAT: A Biologically-informed Differential Gene Multi-Scale Graph Attention Network for predicting neoadjuvant therapy response in rectal cancer.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Xu Luo + 6 more

  • Research Article
  • 10.1016/j.cmpb.2025.109015
Personalized federated learning with hierarchical reweighting for multi-center clinical prediction.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Xuebing Yang + 4 more

  • Research Article
  • 10.1016/j.cmpb.2025.108993
Reducing the spring-back force of the aortic stent graft - decreasing the risk of stent graft-induced new entry.
  • Nov 1, 2025
  • Computer methods and programs in biomedicine
  • Meixuan Li + 6 more
