Artificial intelligence‐based algorithms for the diagnosis of retinopathy of prematurity
Objectives
This is a protocol for a Cochrane Review (diagnostic). The objectives are as follows: to assess the diagnostic performance of AI-based algorithms in comparison to the established reference standard of clinical diagnosis labels for ROP.
Secondary objectives
To undertake a critical examination of potential sources of heterogeneity influencing model performance by conducting subgroup analyses by:
- demographics of the study population (race or ethnicity);
- type of input data (images only, clinical variables only, or a combination of both);
- the fundamental methodology adopted by the AI algorithm (CNN or DNN); and
- scanner type (RetCam, Forus, or Phoenix ICON).
- Research Article
- 10.2196/67529
- Sep 9, 2025
- JMIR Medical Informatics
Background
Artificial intelligence (AI) algorithms offer an effective solution to alleviate the burden of diabetic retinopathy (DR) screening in public health settings. However, there are challenges in translating diagnostic performance and its application when deployed in real-world conditions.
Objective
This study aimed to assess the technical feasibility of integration and the diagnostic performance of validated DR screening (DRS) AI algorithms in real-world outpatient public health settings.
Methods
Prior to integrating an AI algorithm for DR screening, the study involved several steps: (1) five AI companies, four from India and one international, were invited to evaluate their diagnostic performance using low-cost nonmydriatic fundus cameras in public health settings; (2) the AI algorithms were prospectively validated on fundus images from 250 people with diabetes mellitus, captured by a trained optometrist in public health settings in Chandigarh Tricity in North India; performance was evaluated using diagnostic metrics, including sensitivity, specificity, and accuracy, against human grader assessments; (3) the AI algorithm with better diagnostic performance was integrated into a low-cost screening camera deployed at a community health center (CHC) in the Moga district of Punjab, India, where a trained health system optometrist captured nonmydriatic images of 343 patients for AI analysis.
Results
Three web-based AI screening companies agreed to participate, while one declined and one withdrew due to low specificity identified during the interim analysis. The three AI algorithms demonstrated variable diagnostic performance, with sensitivity ranging from 60% to 80% and specificity from 14% to 96%. Upon integration, the better-performing algorithm, AI-3 (sensitivity: 68%, specificity: 96%, accuracy: 88.43%), demonstrated high sensitivity for image gradability (99.5%), DR detection (99.6%), and referral DR (79%) at the CHC.
Conclusions
This study highlights the importance of systematic AI validation for responsible clinical integration, demonstrating the potential of DRS to improve health care access in resource-limited public health settings.
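The headline metrics reported in screening studies like this all derive from a 2x2 confusion matrix. A minimal sketch of the arithmetic (the counts below are hypothetical, chosen only for illustration, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 25 referable cases, 200 non-referable
sens, spec, acc = diagnostic_metrics(tp=17, fp=8, fn=8, tn=192)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.3f}")
```

With these toy counts the sensitivity and specificity happen to match a 68%/96% profile; note that accuracy additionally depends on the disease prevalence in the sample, which is why it is reported separately.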
- Research Article
- 10.1007/s10140-025-02353-2
- Jun 9, 2025
- Emergency radiology
Missed fractures are the primary cause of interpretation errors in emergency radiology, and artificial intelligence has recently shown great promise in radiograph interpretation. This study compared the diagnostic performance of two AI algorithms, BoneView and RBfracture, in detecting traumatic abnormalities (fractures and dislocations) on MSK radiographs. The AI algorithms analyzed 998 radiographs (585 normal, 413 abnormal) against the consensus of two MSK specialists. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and interobserver agreement (Cohen's Kappa) were calculated. 95% confidence intervals (CI) were used to assess robustness, and McNemar's tests compared sensitivity and specificity between the AI algorithms. BoneView demonstrated a sensitivity of 0.893 (95% CI: 0.860-0.920), specificity of 0.885 (95% CI: 0.857-0.909), PPV of 0.846, NPV of 0.922, and accuracy of 0.889. RBfracture demonstrated a sensitivity of 0.872 (95% CI: 0.836-0.901), specificity of 0.892 (95% CI: 0.865-0.915), PPV of 0.851, NPV of 0.908, and accuracy of 0.884. No statistically significant differences were found in sensitivity (p = 0.151) or specificity (p = 0.708). Kappa was 0.81 (95% CI: 0.77-0.84), indicating almost perfect agreement between the two AI algorithms. Performance was similar in adults and children. Both AI algorithms struggled more with subtle abnormalities, which constituted 66% and 70% of false negatives but only 20% and 18% of true positives for the two AI algorithms, respectively (p < 0.001). BoneView and RBfracture exhibited high diagnostic performance and almost perfect agreement, with consistent results across adults and children, highlighting the potential of AI in emergency radiograph interpretation.
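The two agreement statistics used in this comparison are straightforward to compute from paired predictions. A hedged sketch with toy data (not the study's): Cohen's kappa for inter-algorithm agreement, and the McNemar chi-square statistic built from the discordant cells of a paired 2x2 table:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (labels 0/1): observed agreement
    corrected for the agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                # positive rates of each rater
    pe = pa * pb + (1 - pa) * (1 - pb)             # chance agreement
    return (po - pe) / (1 - pe)

def mcnemar_chi2(b, c):
    """McNemar test statistic (no continuity correction) from the two
    discordant counts of a paired 2x2 table (cases where raters disagree)."""
    return (b - c) ** 2 / (b + c)

# Toy example: two algorithms scoring the same six radiographs
alg1 = [1, 1, 0, 0, 1, 0]
alg2 = [1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(alg1, alg2), 3))   # 0.333
```

In practice one would use library implementations (e.g. `sklearn.metrics.cohen_kappa_score` or `statsmodels`' McNemar test), which also provide the confidence intervals reported above.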
- Research Article
- 10.3390/w13111547
- May 31, 2021
- Water
Accurate real-time water quality prediction is of great significance for local environmental managers dealing with upcoming events and emergencies and for developing best management practices. In this study, the performance of different deep learning (DL) models with different input data pre-processing methods for real-time water quality forecasting was compared. Three popular DL models were considered: the convolutional neural network (CNN), the long short-term memory neural network (LSTM), and a hybrid CNN–LSTM. Two types of input data were applied: the original one-dimensional time series and a two-dimensional grey image based on decomposition by the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm. Each type of input data was used in each DL model to forecast the real-time monitored water quality parameters dissolved oxygen (DO) and total nitrogen (TN). The results showed that (1) the CNN–LSTM outperformed the standalone CNN and LSTM models; (2) the models that used CEEMDAN-based input data performed much better than those that used the original input data, and the improvement was much greater for the non-periodic parameter TN than for the periodic parameter DO; and (3) model accuracy gradually decreased as the number of prediction steps increased, with accuracy decaying faster for the original input data than for the CEEMDAN-based input data, and faster for the non-periodic parameter TN than for the periodic parameter DO. Overall, input data preprocessed by the CEEMDAN method effectively improved the forecasting performance of the deep learning models, and this improvement was especially significant for the non-periodic parameter TN.
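Whichever DL model is used, the one-dimensional time series must first be reshaped into supervised (input window, target) pairs. A minimal sketch of that windowing step, under the assumption of single-step-ahead forecasting (the function name and lag count are illustrative, not from the paper):

```python
def make_windows(series, n_lags, horizon=1):
    """Split a 1-D series into supervised pairs: each sample uses n_lags
    past values to predict the value `horizon` steps ahead."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])     # input window of past values
        y.append(series[t + horizon - 1])  # forecast target
    return X, y

# Toy hourly dissolved-oxygen readings
do = [7.9, 8.1, 8.4, 8.2, 7.8, 7.5, 7.6, 8.0]
X, y = make_windows(do, n_lags=3)
print(len(X), X[0], y[0])   # 5 [7.9, 8.1, 8.4] 8.2
```

The CEEMDAN variant differs only in that each window is first decomposed into intrinsic mode functions and stacked into a 2-D array before being fed to the CNN.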
- Research Article
- 10.1007/s11255-023-03722-x
- Jan 1, 2023
- International Urology and Nephrology
Purpose
To evaluate the feasibility of using mpMRI image features predicted by AI algorithms in the prediction of clinically significant prostate cancer (csPCa).
Materials and methods
This study analyzed patients who underwent prostate mpMRI and radical prostatectomy (RP) at the Affiliated Hospital of Jiaxing University between November 2017 and December 2022. The clinical data collected included age, serum prostate-specific antigen (PSA), and biopsy pathology. The reference standard was the prostatectomy pathology; a Gleason Score (GS) of 3 + 3 = 6 was considered non-clinically significant prostate cancer (non-csPCa), while a GS ≥ 3 + 4 was considered csPCa. A pre-trained AI algorithm was used to extract the lesion on mpMRI, and the image features of the lesion and the prostate gland were analyzed. Two logistic regression models were developed to predict csPCa: an MR model and a combined model. The MR model used age, PSA, PSA density (PSAD), and the AI-predicted MR image features as predictor variables. The combined model used biopsy pathology and the aforementioned variables as predictor variables. The models' effectiveness was evaluated against biopsy pathology by comparing the areas under the curve (AUC) in receiver operating characteristic (ROC) analysis.
Results
A total of 315 eligible patients were enrolled, with an average age of 70.8 ± 5.9 years. Based on RP pathology, 18 had non-csPCa and 297 had csPCa. PSA, PSAD, biopsy pathology, and the ADC value of the prostate outside the lesion (ADCprostate) varied significantly across ISUP grade groups of RP pathology (P < 0.001). Other clinical variables and image features did not vary significantly across ISUP grade groups (P > 0.05). The MR model included PSAD, the ratio of ADC value between the lesion and the prostate outside the lesion (ADClesion/prostate), the signal intensity ratio of DWI between the lesion and the prostate outside the lesion (DWIlesion/prostate), and the ratio of DWIlesion/prostate to ADClesion/prostate. The combined model included biopsy pathology, ADClesion/prostate, the mean signal intensity of the lesion on DWI (DWIlesion), the DWI signal intensity of the prostate outside the lesion (DWIprostate), and DWIlesion/prostate. The AUC of the MR model (0.830, 95% CI 0.743, 0.916) was not significantly different from that of biopsy pathology (0.820, 95% CI 0.728, 0.912, P = 0.884). The AUC of the combined model (0.915, 95% CI 0.849, 0.980) was higher than that of biopsy pathology (P = 0.042) and of the MR model (P = 0.031).
Conclusion
The aggressiveness of prostate cancer can be effectively predicted using AI-extracted image features from mpMRI images, similar to biopsy pathology. Prediction accuracy improved when the AI-extracted mpMRI image features were combined with biopsy pathology, surpassing the performance of biopsy pathology alone.
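The AUC comparisons above rest on the rank interpretation of AUC: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A small self-contained illustration (the scores are made up):

```python
def auc(pos_scores, neg_scores):
    """AUC via its rank interpretation: the probability that a random
    positive scores above a random negative (ties count one half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores: csPCa cases (positive) vs non-csPCa (negative)
print(auc([0.9, 0.8, 0.7], [0.2, 0.8, 0.1]))
```

Library routines such as `sklearn.metrics.roc_auc_score` compute the same quantity efficiently; the pairwise form above is only meant to make the definition concrete.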
- Supplementary Content
- 10.3390/ani15172481
- Aug 23, 2025
- Animals : an Open Access Journal from MDPI
Predictive models use historical data to predict a future event and can be applied to a wide variety of tasks. A broader evaluation of the cattle literature is required to better understand predictive model performance across various health challenges and to understand data types utilized to train models. This narrative review aims to describe predictive model performance in greater detail across various disease outcomes, input data types, and algorithms with a specific focus on accuracy, sensitivity, specificity, and positive and negative predictive values. A secondary goal is to address important areas for consideration for future work in the beef cattle sector. In total, 19 articles were included. Broad categories of disease were covered, including respiratory disease, bovine tuberculosis, and others. Various input data types were reported, including demographic data, images, and laboratory test results, among others. Several algorithms were utilized, including neural networks, linear models, and others. Accuracy, sensitivity, and specificity values ranged widely across disease outcome and algorithm categories. Negative predictive values were greater than positive predictive values for most disease outcomes. This review highlights the importance of utilizing several performance metrics and concludes that future work should address prevalence of outcomes and class-imbalanced data.
- Research Article
- 10.1007/s00330-021-08265-2
- Nov 11, 2021
- European Radiology
To establish and validate a predictive model integrating clinical and dual-energy CT (DECT) variables for individual recurrence-free survival (RFS) prediction in early-stage glottic laryngeal cancer (EGLC) after larynx-preserving surgery. This retrospective study included 212 consecutive patients with EGLC who underwent DECT before larynx-preserving surgery between January 2015 and December 2018. A Cox proportional hazards regression model was used to determine independent predictors of RFS, which were presented on a nomogram. The model's performance was assessed using Harrell's concordance index (C-index), a time-dependent area under the curve (TD-AUC) plot, and a calibration curve. A risk stratification system was established using the nomogram, with the median score of all cases dividing patients into two prognostic groups. Recurrence occurred in 39/212 (18.4%) cases. Normalized iodine concentration in the arterial (NICAP) and venous (NICVP) phases were verified as significant predictors of RFS in multivariate Cox regression (hazard ratio [HR], 4.2; 95% confidence interval [CI]: 2.3, 7.7, p < .001 and HR, 3.0; 95% CI: 1.5, 5.9, p = .002, respectively). The nomogram based on clinical and DECT variables performed better than one based on clinical variables alone. The prediction model proved well calibrated and had good discriminative ability in the training and validation samples. A risk stratification system was built that could effectively classify EGLC patients into two risk groups. DECT could provide independent RFS indicators in patients with EGLC, and the nomogram based on DECT and clinical variables was useful in predicting RFS at several time points. • Dual-energy CT (DECT) variables can predict recurrence-free survival (RFS) after larynx-preserving surgery in patients with early-stage glottic laryngeal cancer (EGLC). • The model integrating clinical and DECT variables predicted RFS better than clinical variables alone. • A risk stratification system based on the nomogram could effectively classify EGLC patients into two risk groups.
- Supplementary Content
- 10.47176/mjiri.39.110
- Aug 20, 2025
- Medical Journal of the Islamic Republic of Iran
Background
Early detection of lymph node metastasis (LNM) in gastric cancer (GC) is essential to determine the treatment strategy. Conventional methods exhibit limited efficacy, highlighting the need for more reliable approaches. Deep learning (DL) models show promise for LNM detection on computed tomography (CT), but their performance requires comprehensive evaluation. This systematic review and meta-analysis evaluates the diagnostic performance of CT-based DL models for detecting LNM in GC patients.
Methods
A systematic review and meta-analysis was conducted according to PRISMA-DTA guidelines. PubMed, Embase, and Web of Science were searched up to May 5, 2025, focusing on studies that used DL models to detect LNM on CT in GC. Pooled estimates were calculated using a bivariate random-effects model, heterogeneity and publication bias were assessed, and clinical utility was evaluated via Fagan plots and likelihood ratio matrices. Subgroup analyses were stratified by validation type, input data type, CT phase, segmentation technique, and DL architecture. Study quality was assessed with QUADAS-2.
Results
Of the 14 included studies, 11 studies with 5296 patients were analyzed. In internal validation, DL feature-based models achieved a pooled area under the curve (AUC) of 0.91 (95% CI: 0.88-0.93), sensitivity of 0.86 (95% CI: 0.75-0.92), and specificity of 0.83 (95% CI: 0.67-0.92). Performance degraded in external validation, with specificity dropping to 0.59 (95% CI: 0.26-0.85). Models that integrated DL features with radiomics features showed similar overall performance but higher confirmatory power. In terms of clinical utility, although the models could significantly alter post-test probabilities, they lacked the certainty required to serve as standalone diagnostic tools.
Conclusion
CT-based DL models show high diagnostic accuracy but limited generalizability across external datasets, indicating overfitting. A key finding of this meta-analysis is that pervasive and asymmetric heterogeneity, particularly in specificity, suggests that technical standardization alone is insufficient. Integrating clinical variables reduces heterogeneity; however, prospective, multicenter studies are needed to further enhance reproducibility.
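The Fagan-plot clinical-utility assessment mentioned above is plain Bayes arithmetic on likelihood ratios. A hedged sketch using the pooled internal-validation estimates (the 30% pre-test probability is an arbitrary illustration, not a figure from the review):

```python
def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Update a pre-test probability with a test result via likelihood
    ratios: LR+ = sens / (1 - spec) for a positive result,
    LR- = (1 - sens) / spec for a negative one."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pre_odds = pretest / (1 - pretest)     # probability -> odds
    post_odds = pre_odds * lr              # Bayes update on the odds scale
    return post_odds / (1 + post_odds)     # odds -> probability

# Pooled internal-validation estimates: sensitivity 0.86, specificity 0.83
p_pos = post_test_probability(0.30, 0.86, 0.83, positive=True)
p_neg = post_test_probability(0.30, 0.86, 0.83, positive=False)
print(round(p_pos, 2), round(p_neg, 2))   # 0.68 0.07
```

This is the arithmetic behind the review's conclusion: the models shift post-test probabilities substantially, but not far enough toward 0 or 1 to be standalone diagnostic tools.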
- Research Article
- 10.1177/19322968251355967
- Aug 25, 2025
- Journal of diabetes science and technology
Artificial intelligence (AI) has emerged as a transformative tool for advancing gestational diabetes mellitus (GDM) care, offering dynamic, data-driven methods for early detection, management, and personalized intervention. This systematic review aims to comprehensively explore and synthesize the use of AI models in GDM care, including screening, diagnosis, management, and prediction of maternal and neonatal outcomes. Specifically, we examine (1) study designs and population characteristics; (2) the use of AI across different aspects of GDM care; (3) types of input data used for AI modeling; and (4) AI model types, validation strategies, and performance metrics. A systematic search was conducted across six electronic databases, identifying 126 eligible studies published up to February 2025. Data extraction and quality appraisal were independently conducted by six reviewers, using a modified version of the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool for risk of bias assessment. Among 126 studies, 75% employed retrospective designs, with sample sizes ranging from 17 to over 100 000 participants. Most AI applications (85%) focused on early GDM prediction, while fewer addressed management, outcomes, or monitoring. Classical machine learning dominated (84%), with logistic regression and random forest frequently used. Internal validation was common (68%), but external validation was rare (6%). Our risk of bias appraisal indicated an overall moderate-to-good methodological quality, with notable deficiencies in analysis reporting. AI demonstrates strong potential to improve GDM prediction, screening, and management. Nonetheless, broader validation, enhanced model interpretability, and prospective studies in diverse populations are needed to translate these technologies into clinical practice.
- Research Article
- 10.4103/jpbs.jpbs_873_23
- Feb 1, 2024
- Journal of Pharmacy and Bioallied Sciences
Background: Periodontal disease, characterized by inflammation and damage to tooth-supporting structures, poses a prevalent oral health concern. Early detection is crucial for effective management. Materials and Methods: This study comprised 60 patients with varying degrees of periodontal disease. Intraoral images were captured using digital cameras, and AI algorithms were trained to analyze these images for signs of periodontal disease. Clinical diagnoses, conducted by experienced periodontal specialists, were used as the reference standard. Results: The AI algorithms achieved an overall accuracy of 87% in diagnosing periodontal disease. Sensitivity was 90%, indicating the AI’s ability to correctly identify 90% of true cases, while specificity stood at 84%, demonstrating its capability to accurately classify 84% of non-diseased cases. In comparison, clinical diagnosis yielded an overall accuracy of 86%. Statistical analysis showed no significant difference between AI-based diagnosis and clinical examination (P > 0.05). Conclusion: This study underscores the promising potential of AI algorithms in diagnosing periodontal disease through intraoral image analysis.
- Research Article
- 10.1111/acps.13737
- Jul 20, 2024
- Acta psychiatrica Scandinavica
The goals of this article are as follows. First, to investigate the possibility of detecting autism spectrum disorder (ASD) from text data using the latest generation of machine learning tools. Second, to compare model performance on two datasets of transcribed statements, collected using two different diagnostic tools. Third, to investigate the feasibility of knowledge transfer between models trained on both datasets and to check whether data augmentation can help alleviate the problem of a small number of observations. We explore two techniques to detect ASD. The first is based on fine-tuning HerBERT, a BERT-based, monolingual deep transformer neural network. The second uses the newest multipurpose text embeddings from OpenAI together with a classifier. We apply the methods to two separate datasets of transcribed statements, collected using two different diagnostic tools: thought, language, and communication (TLC) and the autism diagnosis observation schedule-2 (ADOS-2). We conducted several cross-dataset experiments, both in a zero-shot setting and in a setting where models are pretrained on one dataset and training then continues on another, to test the possibility of knowledge transfer. Unlike in previous studies, the models we tested obtained only average results on the ADOS-2 data but performed very well on the TLC data. We did not observe any benefits from knowledge transfer between datasets. We observed relatively poor performance of models trained on augmented data and hypothesize that data augmentation by back translation obfuscates autism-specific signals. The quality of machine learning models that detect ASD from text data is improving, but model results depend on the type of input data and diagnostic tool.
- Research Article
- 10.1093/ehjci/jeab090.046
- Jul 13, 2021
- European Heart Journal - Cardiovascular Imaging
Funding Acknowledgements Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Advancing Impact Award scheme of the EPSRC Impact Acceleration Account at King’s College London Background Artificial intelligence (AI) has the potential to facilitate the automation of CMR analysis for biomarker extraction. However, most AI algorithms are trained on a specific input domain (e.g., scanner vendor or hospital-tailored imaging protocol) and lack the robustness to perform optimally when applied to CMR data from other input domains. Purpose To develop and validate a robust CMR analysis tool for automatic segmentation and cardiac function analysis which achieves state-of-the-art performance for multi-vendor short-axis cine CMR images. Methods The current work is an extension of our previously published quality-controlled AI-based tool for cine CMR analysis [1]. We deployed an AI algorithm that is equipped to handle different image sizes and domains automatically - the ‘nnU-Net’ framework [2] - and retrained our tool using the UK Biobank (UKBB) cohort population (n = 4,872) and a large database of clinical CMR studies obtained from two NHS hospitals (n = 3,406). The NHS hospital data came from three different scanner types: Siemens Aera 1.5T (n = 1,419), Philips Achieva 1.5T and 3T (n = 1,160), and Philips Ingenia 1.5T (n = 827). The ‘nnU-Net’ was used to segment both ventricles and the myocardium. The proposed method was evaluated on randomly selected test sets from UKBB (n = 488) and NHS (n = 331) and on two external publicly available databases of clinical CMRs acquired on Philips, Siemens, General Electric (GE), and Canon CMR scanners – ACDC (n = 100) [3] and M&Ms (n = 321) [4]. We calculated the Dice scores - which measure the overlap between manual and automatic segmentations - and compared manual vs AI-based measures of biventricular volumes and function.
Results Table 1 shows that the Dice scores for the NHS, ACDC, and M&Ms scans are similar to those obtained in the highly controlled, single vendor and single field strength UKBB scans. Although our AI-based tool was only trained on CMR scans from two vendors (Philips and Siemens), it performs similarly in unseen vendors (GE and Canon). Furthermore, it achieves state-of-the-art performance in online segmentation challenges, without being specifically trained on these databases. Table 1 also shows good agreement between manual and automated clinical measures of ejection fraction and ventricular volume and mass. Conclusions We show that our proposed AI-based tool, which combines training on a large-scale multi-domain CMR database with a state-of-the-art AI algorithm, allows us to robustly deal with routine clinical data from multiple centres, vendors, and field strengths. This is a fundamental step for the clinical translation of AI algorithms. Moreover, our method yields a range of additional metrics of cardiac function (filling and ejection rates, regional wall motion, and strain) at no extra computational cost.
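The Dice score used as the headline segmentation metric here is a simple overlap measure between two binary masks. A minimal sketch on flattened toy masks (not CMR data):

```python
def dice(mask_a, mask_b):
    """Dice coefficient for two flat binary masks: twice the overlap
    divided by the total foreground size (1.0 = perfect overlap)."""
    inter = sum(1 for x, y in zip(mask_a, mask_b) if x and y)
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0  # both empty -> perfect

manual    = [1, 1, 1, 0, 0, 0]   # expert annotation, flattened
automatic = [0, 1, 1, 1, 0, 0]   # AI segmentation, flattened
print(dice(manual, automatic))
```

For real 2-D or 3-D masks the same formula is applied to the flattened arrays, typically per structure (left ventricle, right ventricle, myocardium) as in the study's Table 1.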
- Research Article
- 10.1109/access.2023.3245525
- Jan 1, 2023
- IEEE Access
In this study, we propose a deep neural network (DNN) model that extracts the subgap states in the channel layer of oxide thin-film transistors. We have developed a framework that includes creating a model training set, preprocessing the data, optimizing the model structure, decoding from density-of-state (DOS) parameters to current–voltage (I–V) characteristics, and evaluating the model performance and accuracy of curve fitting. We investigate in detail the effect of data preprocessing methods and model structure on the performance of the model. The primary finding is that the input data type and the last hidden layer significantly affect the performance of the regression model. Using double-type input data composed of several voltages and linear current values is more advantageous than using log-scale current. Moreover, the number of nodes in the last hidden layer of a regression model with multiple output nodes should be large enough to avoid interference between the output values. The proposed model outputs five DOS parameters, and the resulting parameters are decoded to an I–V curve through interpolation based on the nearest 32 data points from the given dataset. We evaluate the model performance using the threshold voltage and on-current difference between a target curve and the decoded curve. The proposed model calibrates 97.1% of the 14,400 curves within a threshold voltage difference of 0.2 V and an on-current error of 5%. Hence, the proposed model is verified to effectively extract DOS parameters with high accuracy based on the current characteristics of oxide thin-film transistors. We expect to improve the efficiency of defect analysis by replacing the iterative manual technology computer aided design (TCAD) curve fitting with an automatic DNN model.
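The final pass-rate figure (97.1% of 14,400 curves) combines two per-curve tolerances. A hedged sketch of that acceptance check, with each curve reduced to a hypothetical (threshold voltage, on-current) summary pair; this summarization is an assumption for illustration, since the paper evaluates full decoded I–V curves:

```python
def calibration_pass_rate(targets, decoded, vth_tol=0.2, ion_tol=0.05):
    """Fraction of curves whose decoded threshold voltage is within
    vth_tol volts of the target AND whose on-current relative error
    is within ion_tol. Each curve is a (vth, ion) pair here."""
    passed = 0
    for (vth_t, ion_t), (vth_d, ion_d) in zip(targets, decoded):
        if abs(vth_d - vth_t) <= vth_tol and abs(ion_d - ion_t) / ion_t <= ion_tol:
            passed += 1
    return passed / len(targets)

targets = [(1.00, 1.0e-4), (0.50, 2.0e-4)]
decoded = [(1.10, 1.02e-4), (0.90, 2.0e-4)]   # second curve misses the Vth tolerance
print(calibration_pass_rate(targets, decoded))   # 0.5
```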
- Research Article
- 10.3390/hydrology7040092
- Nov 27, 2020
- Hydrology
In recent years, rain floods caused by abnormal precipitation have caused significant damage in various parts of Russia. Precise forecasting of rainfall runoff is essential both for operational practice, to optimize the operation of infrastructure in urbanized territories, and for better practices in flood prevention, protection, and mitigation. The network of rain gauges in some Russian regions is very sparse, so adequate assessment and modeling of precipitation patterns and their spatial distribution is often impossible. In this case, radar data can be efficiently used for modeling of rain floods, as shown by previous research. This study aims to simulate rain floods in a small catchment in north-west Russia using radar- and ground-based measurements. The investigation area is located in the Polomet’ river basin, which is the key object for runoff and water discharge monitoring in the Valdai Hills, Russia. Two precipitation datasets (rain gauge and weather radar) were used in this work. The modeling was performed in the open-source Soil and Water Assessment Tool (SWAT) hydrological model with three types of input data: rain gauge, radar, and gauge-adjusted radar data. The simulation efficiency is assessed using the coefficient of determination R2 and the Nash–Sutcliffe model efficiency coefficient (NSE), and by comparing the mean values and standard deviations of the calculated and measured water discharge. The SWAT model captures well the different phases of the water regime and reproduces the hydrographs of the river runoff of the Polomet’ river with good quality. In general, the best model performance was observed for rain gauge data (NSE up to 0.70 at the Polomet’ river–Lychkovo station); however, good results were also obtained when using adjusted data. The discrepancies between observed and simulated water flows in the model might be explained by the sparse network of meteorological stations in the area of the studied basin, which does not allow for a more accurate correction of the radar data.
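The Nash–Sutcliffe efficiency used to score the SWAT runs compares the model's error variance to the variance of the observations (1 = perfect fit, 0 = no better than predicting the mean). A minimal sketch with made-up discharge values:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - sse / var

obs = [2.0, 3.5, 5.0, 4.0, 2.5]   # observed discharge, m3/s (toy values)
sim = [2.2, 3.3, 4.6, 4.1, 2.8]   # simulated discharge, m3/s (toy values)
print(round(nse(obs, sim), 3))    # 0.94
```

Note that NSE can go negative when the model is worse than the observed mean, which is why values such as the study's 0.70 are read on an open-ended scale capped at 1.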
- Research Article
- 10.1016/j.envsoft.2015.09.006
- Sep 22, 2015
- Environmental Modelling & Software
Setting up a hydrological model of Alberta: Data discrimination analyses prior to calibration
- Research Article
- 10.1016/j.apr.2021.101168
- Aug 9, 2021
- Atmospheric Pollution Research
PM2.5 concentrations forecasting in Beijing through deep learning with different inputs, model structures and forecast time