Explainable artificial intelligence (XAI) for predicting the need for intubation in methanol-poisoned patients: a study comparing deep and machine learning models
This study evaluates explainable AI models for predicting intubation in methanol-poisoned patients, finding that machine learning models, especially Random Forest and XGBoost, outperform deep learning models across multiple metrics, with accuracies up to 97% and sensitivities around 99%, highlighting their potential for clinical decision support.
The need for intubation in methanol-poisoned patients, if not predicted in time, can lead to irreparable complications and even death. Artificial intelligence (AI) techniques like machine learning (ML) and deep learning (DL) greatly aid in accurately predicting intubation needs for methanol-poisoned patients. Our study therefore aims to assess Explainable Artificial Intelligence (XAI) for predicting intubation necessity in methanol-poisoned patients, comparing deep learning and machine learning models. This study analyzed a dataset of 897 patient records from Loghman Hakim Hospital in Tehran, Iran, encompassing cases of methanol poisoning, including those requiring intubation (202 cases) and those not requiring it (695 cases). Eight established ML (SVM, XGB, DT, RF) and DL (DNN, FNN, LSTM, CNN) models were used. Techniques such as tenfold cross-validation and hyperparameter tuning were applied to prevent overfitting. The study also focused on interpretability through SHAP and LIME methods. Model performance was evaluated based on accuracy, specificity, sensitivity, F1-score, and ROC curve metrics. Among DL models, LSTM showed superior performance in accuracy (94.0%), sensitivity (99.0%), specificity (94.0%), and F1-score (97.0%). CNN led in ROC with 78.0%. For ML models, RF excelled in accuracy (97.0%) and specificity (100%), followed by XGB with sensitivity (99.37%), F1-score (98.27%), and ROC (96.08%). Overall, RF and XGB outperformed other models, with accuracy (97.0%) and specificity (100%) for RF, and sensitivity (99.37%), F1-score (98.27%), and ROC (96.08%) for XGB. ML models surpassed DL models across all metrics, with accuracies from 93.0% to 97.0% for DL and 93.0% to 99.0% for ML. Sensitivities ranged from 98.0% to 99.37% for DL and 93.0% to 99.0% for ML. DL models achieved specificities from 78.0% to 94.0%, while ML models ranged from 93.0% to 100%. F1-scores for DL were between 93.0% and 97.0%, and for ML between 96.0% and 98.27%.
DL models scored ROC between 68.0% and 78.0%, while ML models ranged from 84.0% to 96.08%. Key features for predicting intubation necessity include GCS at admission, ICU admission, age, longer folic acid therapy duration, elevated BUN and AST levels, VBG_HCO3 at initial record, and hemodialysis presence. This study showcases XAI's effectiveness in predicting intubation necessity in methanol-poisoned patients. ML models, particularly RF and XGB, outperform DL counterparts, underscoring their potential for clinical decision-making.
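All five headline metrics above derive from the binary confusion matrix. A minimal sketch of how they are computed (the counts below are illustrative stand-ins, not the study's actual tallies):

```python
# Metrics reported in the study, computed from a 2x2 confusion matrix.
# The counts passed in below are invented for illustration only.
def binary_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                # recall on the intubated class
    specificity = tn / (tn + fp)                # recall on the non-intubated class
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}

m = binary_metrics(tp=190, fp=10, tn=685, fn=12)
print({k: round(v, 3) for k, v in m.items()})
```

Note that with 695 non-intubated versus 202 intubated cases, the classes are imbalanced, which is why the abstract reports specificity and F1 alongside accuracy.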
- Research Article
16
- 10.1038/s41598-024-82931-5
- Dec 28, 2024
- Scientific Reports
Failure to predict stroke promptly may lead to delayed treatment, causing severe consequences like permanent neurological damage or death. Early detection using deep learning (DL) and machine learning (ML) models can enhance patient outcomes and mitigate the long-term effects of strokes. The aim of this study is to compare these models, exploring their efficacy in predicting stroke. This study analyzed a dataset comprising 663 records from patients hospitalized at Hazrat Rasool Akram Hospital in Tehran, Iran, including 401 healthy individuals and 262 stroke patients. A total of eight established ML (SVM, XGB, KNN, RF) and DL (DNN, FNN, LSTM, CNN) models were utilized to predict stroke. Techniques such as 10-fold cross-validation and hyperparameter tuning were implemented to prevent overfitting. The study also focused on interpretability through Shapley Additive Explanations (SHAP). Model performance was evaluated based on accuracy, specificity, sensitivity, F1-score, and ROC curve metrics. Among DL models, LSTM showed superior sensitivity at 96.15%, while FNN exhibited better specificity (96.0%), accuracy (96.0%), F1-score (95.0%), and ROC (98.0%). For ML models, RF displayed higher sensitivity (99.9%), accuracy (99.0%), specificity (100%), F1-score (99.0%), and ROC (99.9%). Overall, RF outperformed all other models; setting RF aside, the DL models surpassed the remaining ML models in most metrics. DL models (CNN, LSTM, DNN, FNN) achieved sensitivities from 93.0 to 96.15%, specificities from 80.0 to 96.0%, accuracies from 92.0 to 96.0%, F1-scores from 87.34 to 95.0%, and ROC scores from 95.0 to 98.0%. In contrast, ML models (KNN, XGB, SVM) showed sensitivities between 29.0% and 94.0%, specificities between 89.47% and 96.0%, accuracies between 71.0% and 95.0%, F1-scores between 44.0% and 95.0%, and ROC scores between 64.0% and 95.0%.
This study demonstrates the efficacy of DL and ML models in predicting stroke, with the RF models outperforming all others in key metrics. While DL models generally surpassed ML models, RF’s exceptional performance highlights the potential of combining these technologies for early stroke detection, significantly improving patient outcomes by preventing severe consequences like permanent neurological damage or death.
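The 10-fold cross-validation protocol this study (like the methanol study above) uses to guard against overfitting can be sketched from scratch: partition the record indices into ten folds, train on nine, and score on the held-out fold. `evaluate` below is a placeholder for fitting and scoring any of the listed models:

```python
# Plain k-fold index partitioning; no shuffling or stratification is shown,
# and `evaluate` stands in for an arbitrary model's fit-and-score step.
def kfold_indices(n, k=10):
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cross_validate(n, k, evaluate):
    folds = kfold_indices(n, k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(evaluate(train_idx, test_idx))
    return sum(scores) / k

# Sanity check on the study's cohort size: every record lands in exactly one test fold.
folds = kfold_indices(663, 10)
print(sum(len(f) for f in folds))  # 663
```

In practice one would also stratify the folds so each preserves the 401/262 class ratio, but the partition-train-score loop is the core of the technique.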
- Research Article
13
- 10.1371/journal.pone.0282608
- Mar 9, 2023
- PLOS ONE
COVID-19 is highly infectious and causes acute respiratory disease. Machine learning (ML) and deep learning (DL) models are vital in detecting disease from computerized chest tomography (CT) scans. The DL models outperformed the ML models. For COVID-19 detection from CT scan images, DL models are used as end-to-end models. Thus, the performance of the model is evaluated for the quality of the extracted feature and classification accuracy. There are four contributions included in this work. First, this research is motivated by studying the quality of the features extracted by the DL model by feeding these extracted features to an ML model. In other words, we proposed comparing the end-to-end DL model performance against the approach of using DL for feature extraction and ML for the classification of COVID-19 CT scan images. Second, we proposed studying the effect of fusing extracted features from image descriptors, e.g., Scale-Invariant Feature Transform (SIFT), with extracted features from DL models. Third, we proposed a new Convolutional Neural Network (CNN) to be trained from scratch and then compared to deep transfer learning on the same classification problem. Finally, we studied the performance gap between classic ML models and ensemble learning models. The proposed framework is evaluated using a CT dataset, where the obtained results are evaluated using five different metrics. The obtained results revealed that using the proposed CNN model is better than using the well-known DL model for the purpose of feature extraction. Moreover, using a DL model for feature extraction and an ML model for the classification task achieved better results in comparison to using an end-to-end DL model for detecting COVID-19 in CT scan images. Of note, the accuracy rate of the former method improved by using ensemble learning models instead of the classic ML models. The proposed method achieved the best accuracy rate of 99.39%.
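The paper's central comparison, a DL network used only as a feature extractor feeding an ML classifier, can be sketched in miniature. Everything below is a toy stand-in: a fixed random projection replaces the CNN's penultimate layer, and a nearest-centroid rule replaces the ensemble classifier:

```python
import random

# Toy "DL features -> ML classifier" pipeline. The projection W stands in for
# a trained CNN's penultimate-layer activations; nearest-centroid stands in
# for the ML/ensemble classifier. Data and dimensions are invented.
random.seed(0)
D_IN, D_FEAT = 16, 4
W = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_FEAT)]

def extract_features(x):
    # Fixed linear map, playing the role of the frozen DL feature extractor.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in W]

def fit_centroids(X, y):
    cents = {}
    for label in set(y):
        feats = [extract_features(x) for x, t in zip(X, y) if t == label]
        cents[label] = [sum(col) / len(feats) for col in zip(*feats)]
    return cents

def predict(cents, x):
    f = extract_features(x)
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(f, cents[c])))

# Two separable toy "scans": all-low vs all-high pixel intensities.
X = [[0.0] * D_IN, [0.1] * D_IN, [1.0] * D_IN, [0.9] * D_IN]
y = [0, 0, 1, 1]
cents = fit_centroids(X, y)
print([predict(cents, x) for x in X])  # [0, 0, 1, 1]
```

The design point the paper tests is exactly this separation: once features are frozen, any classifier (classic or ensemble) can be swapped in behind them and compared on equal footing.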
- Research Article
23
- 10.1007/s11356-021-13503-7
- Mar 23, 2021
- Environmental Science and Pollution Research
Understanding the spatial distribution of soil salinity is required to conserve land against degradation and desertification. Against this background, this study is the first attempt to predict soil salinity in the Jaghin basin, in southern Iran, by applying and comparing the performance of four deep learning (DL) models (deep convolutional neural networks-DCNNs, dense connected deep neural networks-DenseDNNs, recurrent neural networks-long short-term memory-RNN-LSTM and recurrent neural networks-gated recurrent unit-RNN-GRU) and six shallow machine learning (ML) models (bagged classification and regression tree-BCART, cforest, cubist, quantile regression with LASSO penalty-QR-LASSO, ridge regression-RR and support vector machine-SVM). To do this, 49 environmental Landsat 8-derived variables including digital elevation model (DEM)-extracted covariates, soil-salinity indices, and other variables (e.g., soil order, lithology, land use) were mapped spatially. For assessing the relationships between soil salinity (EC) and factors controlling EC, we collected 319 surficial (0-5 cm depth) soil samples for measuring soil salinity on the basis of electrical conductivity (EC). We then selected the most important features (covariates) controlling soil salinity by applying a MARS model. The performance of the DL and shallow ML models for generating soil salinity spatial maps (SSSMs) was assessed using a Taylor diagram and the Nash–Sutcliffe efficiency coefficient (NSE). Among all 10 predictive models, DL models with NSE ≥ 0.9 (DCNNs was the most accurate model with NSE = 0.96) were selected as the four best models, and performed better than the six shallow ML models with NSE ≤ 0.83 (QR-LASSO was the weakest predictive model with NSE = 0.50). Based on DCNNs, the EC values ranged between 0.67 and 14.73 dS/m, whereas for QR-LASSO the corresponding EC values were 0.37 to 19.6 dS/m.
Overall, DL models performed better than shallow ML models for production of the SSSMs and therefore we recommend applying DL models for prediction purposes in environmental sciences.
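The Nash-Sutcliffe efficiency used to rank all ten models above has a compact definition: NSE = 1 - SSE/Var, where SSE is the squared error of the simulation and Var is the squared deviation of the observations from their mean. A value of 1 is a perfect fit; 0 means no better than always predicting the observed mean:

```python
# Nash-Sutcliffe efficiency for a predicted vs. observed series.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [1.0, 2.0, 3.0, 4.0, 5.0]        # illustrative EC-like values, not study data
print(nse(obs, obs))                   # 1.0 for a perfect model
print(nse(obs, [3.0] * 5))             # 0.0 for the mean predictor
```

On this scale, the gap between DCNNs (0.96) and QR-LASSO (0.50) is the difference between near-perfect reproduction of observed salinity and explaining only half the observed variance.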
- Research Article
16
- 10.1007/s11356-024-35764-8
- Jan 1, 2025
- Environmental Science and Pollution Research
Human-induced global warming, primarily attributed to the rise in atmospheric CO2, poses a substantial risk to the survival of humanity. While most research focuses on predicting annual CO2 emissions, which are crucial for setting long-term emission mitigation targets, the precise prediction of daily CO2 emissions is equally vital for setting short-term targets. This study examines the performance of 14 models in predicting daily CO2 emissions data from 1/1/2022 to 30/9/2023 across the top four polluting regions (China, India, the USA, and the EU27&UK). The 14 models used in the study include four statistical models (ARMA, ARIMA, SARMA, and SARIMA), three machine learning models (support vector machine (SVM), random forest (RF), and gradient boosting (GB)), and seven deep learning models (artificial neural network (ANN), recurrent neural network variations such as gated recurrent unit (GRU), long short-term memory (LSTM), bidirectional-LSTM (BILSTM), and three hybrid combinations of CNN-RNN). Performance evaluation employs four metrics (R2, MAE, RMSE, and MAPE). The results show that the machine learning (ML) and deep learning (DL) models, with higher R2 (0.714–0.932) and lower RMSE (0.247–0.480) values, outperformed the statistical models, which had R2 (−0.060–0.719) and RMSE (0.537–1.695) values, in predicting daily CO2 emissions across all four regions. The performance of the ML and DL models was further enhanced by differencing, a technique that improves accuracy by ensuring stationarity and creating additional features and patterns from which the model can learn. Additionally, applying ensemble techniques such as bagging and voting improved the performance of the ML models by approximately 9.6%, whereas hybrid combinations of CNN-RNN enhanced the performance of the RNN models. In summary, the performance of both the ML and DL models was relatively similar.
However, due to the high computational requirements associated with DL models, the recommended models for daily CO2 emission prediction are ML models using the ensemble technique of voting and bagging. This model can assist in accurately forecasting daily emissions, aiding authorities in setting targets for CO2 emission reduction.
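The differencing step credited above with improving the ML and DL models is simple to state: replace the raw daily series with day-to-day changes, which removes trend and helps make the series stationary. Because the transform is exactly invertible, predictions made on differences can be mapped back to emission levels:

```python
# First-order differencing and its exact inverse.
def difference(series):
    return [b - a for a, b in zip(series, series[1:])]

def undifference(first, diffs):
    out = [first]
    for d in diffs:
        out.append(out[-1] + d)
    return out

daily_co2 = [30.0, 31.0, 29.5, 32.0, 30.5]   # illustrative values, not the study's data
d = difference(daily_co2)
print(d)                                      # [1.0, -1.5, 2.5, -1.5]
print(undifference(daily_co2[0], d))          # recovers the original series
```

A model is then trained to predict the next difference, and `undifference` turns its output back into a daily emissions forecast.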
- Research Article
16
- 10.1007/s10916-024-02087-7
- Jan 1, 2024
- Journal of Medical Systems
Artificial intelligence (AI) based predictive models for early detection of cardiovascular disease (CVD) risk are increasingly being utilised. However, AI based risk prediction models that account for right-censored data have been overlooked. This systematic review (PROSPERO protocol CRD42023492655) includes 33 studies that utilised machine learning (ML) and deep learning (DL) models for survival outcomes in CVD prediction. We provided details on the employed ML and DL models, eXplainable AI (XAI) techniques, and type of included variables, with a focus on social determinants of health (SDoH) and gender-stratification. Approximately half of the studies were published in 2023, with the majority from the United States. Random Survival Forest (RSF), Survival Gradient Boosting models, and Penalised Cox models were the most frequently employed ML models. DeepSurv was the most frequently employed DL model. DL models were better at predicting CVD outcomes than ML models. Permutation-based feature importance and Shapley values were the most utilised XAI methods for explaining AI models. Moreover, only one in five studies performed gender-stratification analysis and very few incorporated the wide range of SDoH factors in their prediction model. In conclusion, the evidence indicates that RSF and DeepSurv models are currently the optimal models for predicting CVD outcomes. This study also highlights the better predictive ability of DL survival models, compared to ML models. Future research should ensure the appropriate interpretation of AI models, accounting for SDoH, and gender stratification, as gender plays a significant role in CVD occurrence.
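Survival models like RSF and DeepSurv are typically compared with Harrell's concordance index, which handles the right-censored data this review emphasises. Among comparable patient pairs (those where the earlier event was actually observed), it counts how often the model assigned the higher risk to the patient who failed first. A minimal implementation, with invented follow-up data:

```python
# Harrell's C-index for right-censored survival data.
# times: follow-up times; events: 1 = event observed, 0 = censored;
# risks: model-predicted risk scores (higher = expected earlier failure).
def c_index(times, events, risks):
    concordant = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:   # i's event precedes j's follow-up
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5               # ties get half credit
    return concordant / comparable

times  = [2, 5, 7, 9]          # years of follow-up (illustrative)
events = [1, 1, 0, 1]          # patient 3 is right-censored
risks  = [0.9, 0.6, 0.4, 0.2]  # hypothetical model scores
print(c_index(times, events, risks))  # 1.0: risks perfectly order the events
```

Censored patients contribute only as the later member of a pair, which is exactly the information a right-censored record carries.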
- Research Article
- 10.25147/ijcsr.2017.001.1.224
- Jan 1, 2025
- International Journal of Computing Sciences Research
Purpose – This paper presents a comprehensive empirical analysis focusing on sentiment flux within state-of-the-art models designed for handling polarity shifts due to implicit negation in Amazon mobile phone reviews. Method – The research evaluates diverse models spanning traditional machine learning (ML), deep learning (DL), and hybrid models combining both approaches. Various feature extraction, feature selection, and data augmentation techniques are tested on the Amazon mobile phone reviews dataset. BERT and LSTM are used for deep learning, while SVM and Naive Bayes are used for traditional ML. ANOVA is used to identify significant differences and interactions among these factors. Results – DL models show superior performance compared to traditional ML models. ANOVA analysis shows significant performance differences between conventional ML and DL models. Traditional ML models interact significantly with feature extraction and selection techniques, while DL models do not. Traditional ML models do not interact significantly with data augmentation methods, while DL models do. FastText extraction outperforms word2vec; back translation outperforms synonym replacement, while recursive feature elimination (RFE) surpasses TF-IDF (Term Frequency-Inverse Document Frequency). BERT and LSTM exhibit among the strongest performances. Conclusion – The study concludes that DL models are more effective. Data augmentation techniques significantly impact the performance of DL models, with back translation showing superior performance over synonym replacement. This provides a leverage point for developing an improved model in the future. Recommendations – Future research should focus on developing a hybrid model for enhanced polarity-shift management of mobile phone reviews using contextual back translation augmented by Seq2seq perturbations.
This aims at leveraging contextual back translation and Seq2seq perturbations to generate diverse interpretations, consequently improving the model's ability to handle nuanced expressions of sentiment due to implicit negation with enhanced accuracy, generalizability, robustness to polarity shifts, and contextual understanding. Research Implications – The findings provide valuable insights into the development of state-of-the-art models, offering a promising direction for further research in sentiment analysis. Keywords – empirical analysis, hybrid, perturbations, implicit negation, sentiment flux
- Research Article
6
- 10.3390/f15050839
- May 10, 2024
- Forests
Satellite remote sensing plays a significant role in the detection of smoke from forest fires. However, existing methods for detecting smoke from forest fires based on remote sensing images rely solely on the information provided by the images, overlooking the positional information and brightness temperature of the fire spots in forest fires. This oversight significantly increases the probability of misjudging smoke plumes. This paper proposes a smoke detection model, Forest Smoke-Fire Net (FSF Net), which integrates wildfire smoke images with the dynamic brightness temperature information of the region. The MODIS_Smoke_FPT dataset was constructed using a Moderate Resolution Imaging Spectroradiometer (MODIS), the meteorological information at the site of the fire, and elevation data to determine the location of smoke and the brightness temperature threshold for wildfires. Deep learning and machine learning models were trained separately using the image data and fire spot area data provided by the dataset. The performance of the deep learning model was evaluated using the mean average precision (mAP) metric, while the regression performance of machine learning was assessed with Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The selected machine learning and deep learning models were organically integrated. The results show that the Mask_RCNN_ResNet50_FPN and XGR models performed best among the deep learning and machine learning models, respectively. Combining the two models achieved good smoke detection results (Precision for smoke = 89.12%). Compared with wildfire smoke detection models that solely use image recognition, the model proposed in this paper demonstrates stronger applicability in improving the precision of smoke detection, thereby providing beneficial support for the timely detection of forest fires and applications of remote sensing.
- Research Article
1
- 10.1007/s00586-025-08668-5
- Feb 8, 2025
- European spine journal : official publication of the European Spine Society, the European Spinal Deformity Society, and the European Section of the Cervical Spine Research Society
For cases of multilevel lumbar disc herniation (LDH), selecting the surgical approach for Percutaneous Transforaminal Endoscopic Discectomy (PTED) presents significant challenges and heavily relies on the physician's judgment. This study aims to develop a deep learning (DL)-based multimodal model that provides objective and referenceable support by comprehensively analyzing imaging and clinical data to assist physicians. This retrospective study collected imaging and clinical data from patients with multilevel LDH. Each segmental MR scan was concurrently fed into a multi-input ResNet 50 model to predict the target segment. The target segment scan was then input to a custom model to predict the PTED approach direction. Clinical data, including the patient's lower limb sensory and motor functions, were used as feature variables in a machine learning (ML) model for prediction. Bayesian optimization was employed to determine the optimal weights for the fusion of the two models. The predictive performance of the multimodal model significantly outperformed the DL and ML models. For PTED target segment prediction, the multimodal model achieved an accuracy of 93.8%, while the DL and ML models achieved accuracies of 87.7% and 87.0%, respectively. Regarding the PTED approach direction, the multimodal model had an accuracy of 89.3%, significantly higher than the DL model's 87.8% and the ML model's 87.6%. The multimodal model demonstrated excellent performance in predicting PTED target segments and approach directions. Its predictive performance surpassed that of the individual DL and ML models.
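The fusion step above, combining the DL and ML models' predicted probabilities with optimized weights, can be sketched with a single blend weight chosen by grid search on a validation set (a crude stand-in for the paper's Bayesian optimization; all data below is invented):

```python
# Decision-level fusion: blend two models' probabilities with weight w,
# then pick w by validation accuracy. Grid search stands in for the
# Bayesian optimization the study actually used.
def fuse(p_dl, p_ml, w):
    return [w * a + (1 - w) * b for a, b in zip(p_dl, p_ml)]

def accuracy(probs, labels, thresh=0.5):
    return sum((p >= thresh) == bool(y) for p, y in zip(probs, labels)) / len(labels)

def best_weight(p_dl, p_ml, labels, steps=101):
    grid = [i / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda w: accuracy(fuse(p_dl, p_ml, w), labels))

# Toy validation set where each model alone misses one case but the blend is perfect.
labels = [1, 1, 0, 0]
p_dl = [0.9, 0.2, 0.4, 0.1]   # DL misses case 1
p_ml = [0.4, 0.8, 0.3, 0.2]   # ML misses case 0
w = best_weight(p_dl, p_ml, labels)
print(accuracy(fuse(p_dl, p_ml, w), labels))  # 1.0 on this toy set
```

This illustrates why the multimodal model can beat both components: the weighted blend lets each model cover the other's errors wherever their mistakes are uncorrelated.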
- Research Article
34
- 10.1016/j.resourpol.2023.104216
- Oct 1, 2023
- Resources Policy
A novel deep-learning technique for forecasting oil price volatility using historical prices of five precious metals in context of green financing – A comparison of deep learning, machine learning, and statistical models
- Research Article
- 10.1186/s12885-025-15121-9
- Oct 29, 2025
- BMC Cancer
Background: Occult pleural dissemination (PD) in non-small cell lung cancer (NSCLC) patients is likely to be missed on computed tomography (CT) scans, associated with poor survival, and generally contraindicated for radical surgery. This study aimed to develop and compare the performance of radiomics-based machine learning (ML), deep learning (DL), and fusion models to preoperatively identify occult PD in NSCLC patients.
Materials and methods: A total of 326 NSCLC patients from three Chinese high-volume medical centers (2016–2023) were retrospectively collected and divided into training (n = 216), internal test (n = 54), and external test (n = 56) cohorts. Ten radiomics-based ML models and eight DL models were trained using CT images at the maximum cross-sectional slice of the primary tumor. Moreover, another two fusion models (prefusion and postfusion) were developed using feature-based and decision-based methods. The receiver operating characteristic curve (ROC) and area under the curve (AUC) were mainly used to compare the predictive performance of the models.
Results: The GBM (AUC: 0.821) and DenseNet121 (AUC: 0.764) models achieved the highest AUC among ML and DL models in the external test cohort, respectively. The postfusion model, integrating the output probabilities from the GBM and DenseNet121 models, showed superior performance (AUC: 0.828–0.978) compared to the prefusion model (AUC: 0.817–0.877). Moreover, the postfusion model demonstrated the highest sensitivity (82.1–97.2%) among all models across the three cohorts.
Conclusions: The postfusion model, which integrates radiomics-based ML and DL models, can serve as a sensitive diagnostic tool to predict occult PD in NSCLC patients, thereby helping to avoid unnecessary surgeries.
Supplementary information: The online version contains supplementary material available at 10.1186/s12885-025-15121-9.
- Research Article
5
- 10.1016/j.ijmedinf.2025.105812
- Apr 1, 2025
- International journal of medical informatics
Deep learning and machine learning in CT-based COPD diagnosis: Systematic review and meta-analysis.
- Research Article
11
- 10.1111/exsy.13153
- Oct 5, 2022
- Expert Systems
The increase in the number of undesired SMS, termed smishing messages, and the data imbalance problem has generated a great demand for the development of more reliable anti-spam filters. State-of-the-art machine learning approaches are being employed to recognize and separate spam messages. Most recent studies target message classification by using numerous properties and features of the words but fail to consider circumstantial features, such as long-range dependencies between words, that are extremely important in identifying smishing messages. The idea is to develop an intelligent model that will distinguish between smishing messages and ham messages by adopting a combined approach of regular expressions (Regex), machine learning (ML), and deep learning (DL) models. Regex rules are generated using the dataset's spam messages for the purpose of refining the dataset. Support vector machine (SVM), Multinomial Naive Bayes, and Random Forest are included under machine learning models, and long short-term memory (LSTM), bidirectional long short-term memory (Bi-LSTM), stacked LSTM, and stacked Bi-LSTM are included under deep learning models. The comparison between machine learning models and deep learning models is also carried out based on the performance evaluation parameters, namely accuracy, precision, recall, and F1 score. It is observed that deep learning models perform better than machine learning models, and the introduction of regular expressions to the dataset increases the efficiency of both the deep learning models and the machine learning models.
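The Regex stage of the combined approach can be sketched as a rule-based prefilter applied before any learned model sees the message. The patterns and the two-hit threshold below are invented for this sketch, not taken from the paper's generated rules:

```python
import re

# Hypothetical smishing-flagging rules; real systems would derive these
# from the dataset's spam messages, as the study describes.
SMISHING_RULES = [
    re.compile(r"https?://\S+", re.I),                  # embedded links
    re.compile(r"\b(?:win|won|winner|prize)\b", re.I),  # lure vocabulary
    re.compile(r"\bverify\b.*\baccount\b", re.I),       # credential bait
    re.compile(r"\b\d{4,}\b"),                          # long digit runs (codes, amounts)
]

def regex_flag(message, min_hits=2):
    # Flag a message when at least `min_hits` independent rules fire.
    hits = sum(bool(rule.search(message)) for rule in SMISHING_RULES)
    return hits >= min_hits

msgs = [
    "You have WON a prize! Claim at http://example.test/claim",
    "Running late, see you at 7",
]
print([regex_flag(m) for m in msgs])  # [True, False]
```

Requiring multiple rules to fire keeps single benign matches (an ordinary link, a verification code) from being flagged, which is one way such a prefilter can refine the dataset without discarding ham.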
- Preprint Article
- 10.5194/ems2025-562
- Jul 16, 2025
Machine learning (ML) and deep learning (DL) models can play an important role when it comes to modelling complicated processes. Such capability is necessary for hydrological and climate-related applications. Generally, ML models utilize precipitation and temperature time series of a basin as input to develop a lumped rainfall-runoff model to simulate streamflow at the basin outlet. However, when a basin is divided into several sub-basins, Graph Neural Networks (GNN) can consider each sub-basin as a node and link them together using a connectivity matrix to account for spatial variations of hydroclimatic variables. In this study, GNN and various ML models with different types of architecture, ranging from neural networks, tree-based structure, and gradient boosting, were exploited for daily streamflow simulation over different case studies. For each case study, the basin was divided into a few sub-basins for which daily precipitation and temperature data were aggregated and used as input. For training GNN, the connection matrix of sub-basins was also used as input. 75% of historical records were utilized to train GNN and different ML models, e.g., artificial neural networks, support vector machine, decision tree, random forest, eXtreme Gradient Boosting (XGBoost), Light Gradient-Boosting Machine (LightGBM), and Category Boosting (CatBoost), while the rest were used for testing. Streamflow simulation was conducted with/without considering seasonality impact and lag times. The obtained results clearly demonstrate that considering seasonality and time lags can enhance accuracy of streamflow predictions based on Kling–Gupta efficiency (KGE). Furthermore, GNN with seasonality impact and time lags achieved promising results across different case studies, with KGE > 0.85 for training and KGE > 0.59 for testing data. Among ML models, boosting models, e.g., LightGBM and XGBoost, performed slightly better than other ML models.
Finally, this comparative analysis provides valuable insights for ML/DL applications in climate change impact assessments.
Acknowledgements: This research work was carried out as part of the TRANSCEND project with funding received from the European Union Horizon Europe Research and Innovation Programme under Grant Agreement No. 10108411.
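The Kling-Gupta efficiency used to score the streamflow simulations decomposes model skill into three components: correlation r, the ratio of standard deviations alpha, and the ratio of means beta, combined as KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2). A minimal implementation:

```python
import math

# Kling-Gupta efficiency of a simulated series against observations.
def kge(observed, simulated):
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in observed) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in simulated) / n)
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated)) / n
    r = cov / (so * ss)                       # linear correlation
    alpha, beta = ss / so, ms / mo            # variability and bias ratios
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0]                    # illustrative flows, not study data
print(round(kge(obs, obs), 6))                # 1.0 for a perfect simulation
print(round(kge(obs, [o * 2 for o in obs]), 3))  # penalised for doubled mean and spread
```

Unlike a pure correlation score, KGE punishes a model that tracks the hydrograph shape but systematically over- or under-estimates flow volume, which is why the study reports it for both training and testing periods.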
- Research Article
30
- 10.1371/journal.pone.0317619
- Jan 23, 2025
- PloS one
This study presents a comprehensive comparative analysis of Machine Learning (ML) and Deep Learning (DL) models for predicting Wind Turbine (WT) power output based on environmental variables such as temperature, humidity, wind speed, and wind direction. The DL models considered were Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), and Convolutional Neural Network (CNN); the ML models were Linear Regression (LR), Support Vector Regressor (SVR), Random Forest (RF), Extra Trees (ET), Adaptive Boosting (AdaBoost), Categorical Boosting (CatBoost), Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM). Using a dataset of 40,000 observations, the models were assessed based on R-squared, Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). ET achieved the highest performance among ML models, with an R-squared value of 0.7231 and a RMSE of 0.1512. Among DL models, ANN demonstrated the best performance, achieving an R-squared value of 0.7248 and a RMSE of 0.1516. The results show that the DL models, especially ANN, performed slightly better than the best ML models, suggesting a stronger ability to model non-linear dependencies in multivariate data. Preprocessing techniques, including feature scaling and parameter tuning, improved model performance by enhancing data consistency and optimizing hyperparameters. When compared to previous benchmarks, the performance of both ANN and ET demonstrates significant predictive accuracy gains in WT power output forecasting. This study's novelty lies in directly comparing a diverse range of ML and DL algorithms while highlighting the potential of advanced computational approaches for renewable energy optimization.
- Research Article
10
- 10.1038/s41598-025-99167-6
- Apr 25, 2025
- Scientific Reports
Social media platforms provide valuable insights into mental health trends by capturing user-generated discussions on conditions such as depression, anxiety, and suicidal ideation. Machine learning (ML) and deep learning (DL) models have been increasingly applied to classify mental health conditions from textual data, but selecting the most effective model involves trade-offs in accuracy, interpretability, and computational efficiency. This study evaluates multiple ML models, including logistic regression, random forest, and LightGBM, alongside DL architectures such as ALBERT and Gated Recurrent Units (GRUs), for both binary and multi-class classification of mental health conditions. Our findings indicate that ML and DL models achieve comparable classification performance on medium-sized datasets, with ML models offering greater interpretability through variable importance scores, while DL models are more robust to complex linguistic patterns. Additionally, ML models require explicit feature engineering, whereas DL models learn hierarchical representations directly from text. Logistic regression provides the advantage of capturing both positive and negative associations between features and mental health conditions, whereas tree-based models prioritize decision-making power through split-based feature selection. This study offers empirical insights into the advantages and limitations of different modeling approaches and provides recommendations for selecting appropriate methods based on dataset size, interpretability needs, and computational constraints.
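The interpretability point above, that logistic regression captures both positive and negative associations between features and conditions, comes directly from the sign of its learned weights. A minimal scoring sketch (the vocabulary and weights below are invented for illustration, not learned from any dataset):

```python
import math

# Hypothetical bag-of-words logistic-regression scorer. Positive weights push
# the probability of the condition up; negative weights push it down.
WEIGHTS = {"hopeless": 1.8, "tired": 0.9, "grateful": -1.2, "excited": -0.7}
BIAS = -0.5

def predict_proba(tokens):
    z = BIAS + sum(WEIGHTS.get(t, 0.0) for t in tokens)
    return 1.0 / (1.0 + math.exp(-z))         # sigmoid link

p_neg = predict_proba(["feeling", "hopeless", "and", "tired"])
p_pos = predict_proba(["grateful", "and", "excited", "today"])
print(round(p_neg, 3), round(p_pos, 3))
```

Because each weight attaches to a named feature, a clinician-facing report can list the words that raised or lowered a post's score, which is exactly the interpretability advantage the study attributes to ML models over the DL architectures it tested.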