Supplementing systematic review findings with healthcare system data: pilot projects from the Agency for Healthcare Research and Quality Evidence-based Practice Center program

References (showing 10 of 14 papers)
  • Sophie H Bots et al. Using electronic health record data for clinical research: a quick guide. European Journal of Endocrinology, Apr 1, 2022. doi:10.1530/eje-21-1088. (21 citations)
  • Antonio Cherubini et al. Fighting Against Age Discrimination in Clinical Trials. Journal of the American Geriatrics Society, Sep 1, 2010. doi:10.1111/j.1532-5415.2010.03032.x. (181 citations)
  • Jinzhang He et al. Exclusion rates in randomized controlled trials of treatments for physical conditions: a systematic review. Trials, Feb 26, 2020. doi:10.1186/s13063-020-4139-0. (96 citations)
  • Howard A Fink et al. Long-Term Drug Therapy and Drug Discontinuations and Holidays for Osteoporosis Fracture Prevention: A Systematic Review. Annals of Internal Medicine, Apr 23, 2019. doi:10.7326/m19-0533. (122 citations)
  • Harriette G C Van Spall et al. Eligibility Criteria of Randomized Controlled Trials Published in High-Impact General Medical Journals. JAMA, Mar 21, 2007. doi:10.1001/jama.297.11.1233. (961 citations)
  • Kristine E Shields et al. Exclusion of Pregnant Women From Industry-Sponsored Clinical Trials. Obstetrics & Gynecology, Nov 1, 2013. doi:10.1097/aog.0b013e3182a9ca67. (172 citations)
  • Trisha Greenhalgh et al. Adapt or die: how the pandemic made the shift from EBM to EBM+ more urgent. BMJ Evidence-Based Medicine, Jul 19, 2022. doi:10.1136/bmjebm-2022-111952. (73 citations)
  • Robert A Verheij et al. Possible Sources of Bias in Primary Care Electronic Health Record Data Use and Reuse. Journal of Medical Internet Research, May 29, 2018. doi:10.2196/jmir.9134. (229 citations)
  • Jan P Vandenbroucke. Benefits and harms of drug treatments. BMJ, Jul 1, 2004. doi:10.1136/bmj.329.7456.2. (50 citations)
  • Joe Zhang et al. Impact of primary to secondary care data sharing on care quality in NHS England hospitals. NPJ Digital Medicine, Aug 14, 2023. doi:10.1038/s41746-023-00891-y. (10 citations)

Similar Papers
  • Single Report. Supplementing Systematic Review Findings With Healthcare System Data: Pilot Projects From the Agency for Healthcare Research and Quality Evidence-based Practice Center Program. Jan 7, 2025. Haley K Holmer et al. doi:10.23970/ahrqepcwhitepapersupplementing.

Objectives. The Agency for Healthcare Research and Quality (AHRQ), through the Evidence-based Practice Center (EPC) Program, aims to provide health system decision makers with the highest-quality evidence to inform clinical decisions. However, limitations in the literature may lead to inconclusive findings in EPC systematic reviews (SRs). The EPC Program conducted pilot projects to understand the feasibility, benefits, and challenges of utilizing health system data to augment SR findings to support confidence in healthcare decision making based on real-world experiences. Study Design and Setting. Three contractors (each an EPC located at a different health system) selected a recently completed systematic review conducted by their center and identified an evidence gap that electronic health record (EHR) data might address. The three pilot projects addressed clinical questions on infantile epilepsy, migraine, and hip fracture, respectively. EPCs also tracked the additional resources needed to conduct supplemental analyses. A workgroup met monthly from 2022 to 2023 to discuss challenges and lessons learned from the pilot projects. Results. Two supplemental data analyses filled an evidence gap identified in the systematic reviews (raised certainty of evidence, improved applicability), and the third filled a health system knowledge gap. Project challenges fell under three themes: regulatory and logistical issues, data collection and analysis, and interpretation and presentation of findings. Limited ability to capture key clinical variables, given inconsistent or missing data within the EHR, was a major limitation. Conducting supplemental data analysis alongside an SR added considerable time and resources to the review process (estimated total hours to complete the pilot projects ranged from 283 to 595 across EPCs), and the increased effort and resources added limited incremental value. Conclusion. Supplementing existing systematic reviews with analyses of EHR data is feasible but resource intensive and requires specialized skillsets throughout the process. While using EHR data for research has immense potential to generate real-world evidence and fill knowledge gaps, these data may not yet be ready for routine use alongside systematic reviews.

  • Single Report. Health System Panel To Inform and Encourage Use of Evidence Reports: Findings From the Implementation and Evaluation of Two Evidence-Based Tools. Aug 30, 2022. Kathryn Paez et al. doi:10.23970/ahrqepchealthsystempanel.

Objectives. The Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program wants learning health systems (LHSs) to use the evidence from its reports to improve patient care. In 2018, to improve uptake of EPC Program findings, the EPC Program developed a project to enhance LHSs’ adoption of evidence to improve the quality and effectiveness of patient care. AHRQ contracted with the American Institutes for Research (AIR) and its partners to convene a panel of senior leaders from 11 LHSs to guide the development of tools to help health systems use findings from EPC evidence reports. The panel’s contributions led to developing, implementing, and evaluating two electronic tools to make the EPC report findings more accessible. AIR evaluated the LHSs’ use of the tools to understand (1) LHSs’ experiences with and impressions of the tools, (2) how well the tools helped them access evidence, and (3) how well the tools addressed barriers to LHS use of the EPC reports and barriers to applying the evidence from the reports. Data sources. (1) Implementation meetings with 6 LHSs; (2) interviews with 27 health system leaders and clinical staff who used the tools; and (3) website utilization metrics. Results. The tools were efficient and useful sources of summarized evidence to (1) inform systems change, (2) educate trainees and clinicians, (3) inform research, and (4) support shared decision making with patients and families. Clinical leaders appreciated the thoroughness and quality of the evidence reviews and viewed AHRQ as a trusted source of information. Participants found both tools to be valuable and complementary. Participants suggested optimizing the content for mobile device use to facilitate health system uptake of the tools. In addition, they felt it would be helpful to have training resources about tool navigation and interpreting the statistical content in the tools. Conclusions. The evaluation shows that LHSs find the tools to be useful resources for making the EPC Program reports more accessible to health system leaders. The tools have the potential to meet some, but not all, LHS evidence needs, while exposing health system leaders to AHRQ as a resource to help meet their information needs. The ability of the EPC reports to support LHSs in improving the quality of care is limited by the strength and robustness of the evidence, as well as the relevance of the report topics to patient care challenges faced by LHSs.

  • Research Article. PP22 How Do Health System Leaders Use Evidence To Inform Action? International Journal of Technology Assessment in Health Care, Jan 1, 2018. Matthew D Mitchell et al. doi:10.1017/s0266462318001940.

Introduction: The US Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) program sponsors the development of systematic reviews to inform clinical policy and practice. The EPC program sought to better understand how health systems identify and use this evidence. Methods: Representatives from eleven EPCs, the EPC Scientific Resource Center, and AHRQ developed a semi-structured interview script to query a diverse group of nine Key Informants (KIs) involved in health system quality, safety and process improvement about how they identify and use evidence. Interviews were transcribed and qualitatively summarized into key themes. Results: All KIs reported that their organizations have either centralized quality, safety, and process improvement functions within their system, or they have partnerships with other organizations to conduct this work. There was variation in how evidence was identified, with larger health systems having medical librarians and central bureaus to gather and disseminate information and smaller systems having local chief medical officers or individual clinicians do this work. KIs generally prefer guidelines, especially those with treatment algorithms, because they are actionable. They like systematic reviews because they efficiently condense study results and reconcile conflicting data. They prefer information from systematic reviews to be presented as short digestible summaries with the full report available on demand. KIs preferred systematic reviews from reputable entities and those without commercial bias. Some of the challenges KIs reported include how to resolve conflicting evidence, the generalizability of evidence to local needs, determining whether the evidence is up-to-date, and the length of time required to generate reviews. The topics of greatest interest included predictive analytics, high-value care, advance care planning, and care coordination. To increase awareness of AHRQ EPC reviews, KIs suggest alerting people at multiple levels in a health system when new evidence reports are available and making reports easier to find in common search engines. Conclusions: Systematic reviews are valued by health system leaders. To be most useful they should be easy to locate and available in different formats targeted to the needs of different audiences.

  • Research Article. AHRQ EPC Series on Improving Translation of Evidence: Use of a Clinical Pathway for C. Difficile Treatment to Facilitate the Translation of Research Findings into Practice. The Joint Commission Journal on Quality and Patient Safety, Oct 28, 2019. Emilia J Flores et al. doi:10.1016/j.jcjq.2019.10.002. (6 citations)

  • Single Report. Improving the Utility of Evidence Synthesis for Decision Makers in the Face of Insufficient Evidence. Apr 16, 2021. M Hassan Murad et al. doi:10.23970/ahrqepcwhitepaperimproving.

Background: Healthcare decision makers strive to operate on the best available evidence. The Agency for Healthcare Research and Quality Evidence-based Practice Center (EPC) Program aims to support healthcare decision makers by producing evidence reviews that rate the strength of evidence. However, the evidence base is often sparse or heterogeneous, or otherwise results in a high degree of uncertainty and insufficient evidence ratings. Objective: To identify and suggest strategies to make insufficient ratings in systematic reviews more actionable. Methods: A workgroup comprising EPC Program members convened throughout 2020. We conducted iterative discussions considering information from three data sources: a literature review for relevant publications and frameworks, a review of a convenience sample of past systematic reviews conducted by the EPCs, and an audit of methods used in past EPC technical briefs. Results: Several themes emerged across the literature review, review of systematic reviews, and review of technical brief methods. In the purposive sample of 43 systematic reviews, the use of the term “insufficient” covered both instances of no evidence and instances of evidence being present but insufficient to estimate an effect. The results of the literature review and review of the EPC Program systematic reviews illustrated the importance of clearly stating the reasons for insufficient evidence. Results of both the literature review and review of systematic reviews highlighted the factors decision makers consider when making decisions when evidence of benefits or harms is insufficient, such as costs, values, preferences, and equity. We identified five strategies for supplementing systematic review findings when evidence on benefits or harms is expected to be or found to be insufficient: reconsidering eligible study designs, summarizing indirect evidence, summarizing contextual and implementation evidence, modelling, and incorporating unpublished health system data. Conclusion: Throughout early scoping, protocol development, review conduct, and review presentation, authors should consider these five strategies to supplement potentially insufficient findings of benefit or harms. When there is no evidence available for a specific outcome, reviewers should use a statement such as “no studies” instead of “insufficient.” The main reasons for an insufficient evidence rating should be explicitly described.

  • Research Article. Methods Guide for Authors of Systematic Reviews of Medical Tests: A Collaboration Between the Agency for Healthcare Research and Quality (AHRQ) and the Journal of General Internal Medicine. Journal of General Internal Medicine, May 31, 2012. Gerald W Smetana et al. doi:10.1007/s11606-012-2053-1. (22 citations)

  • Research Article. The Dental Curriculum. The Journal of the American Dental Association, May 1, 1934. doi:10.14219/jada.archive.1934.0114.

  • Research Article. Robust estimation of heterogeneous treatment effects using electronic health record data. Statistics in Medicine, Mar 19, 2021. Ruohong Li et al. doi:10.1002/sim.8926. (6 citations)

Estimation of heterogeneous treatment effects is an essential component of precision medicine. Model and algorithm-based methods have been developed within the causal inference framework to achieve valid estimation and inference. Existing methods such as the A-learner, R-learner, modified covariates method (with and without efficiency augmentation), inverse propensity score weighting, and augmented inverse propensity score weighting have been proposed mostly under the square error loss function. The performance of these methods in the presence of data irregularity and high dimensionality, such as that encountered in electronic health record (EHR) data analysis, has been less studied. In this research, we describe a general formulation that unifies many of the existing learners through a common score function. The new formulation allows the incorporation of least absolute deviation (LAD) regression and dimension reduction techniques to counter the challenges in EHR data analysis. We show that under a set of mild regularity conditions, the resultant estimator has an asymptotic normal distribution. Within this framework, we proposed two specific estimators for EHR analysis based on weighted LAD with penalties for sparsity and smoothness simultaneously. Our simulation studies show that the proposed methods are more robust to outliers under various circumstances. We use these methods to assess the blood pressure-lowering effects of two commonly used antihypertensive therapies.
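The abstract above names several heterogeneous-treatment-effect learners and proposes weighted LAD estimators with sparsity penalties; those estimators themselves are not reproduced here. The snippet below is a minimal sketch, under assumed simulated data, of just one of the named learners (the modified covariates method) fitted with an L1-penalized least absolute deviation regression via scikit-learn, to show why an absolute-error loss can help with heavy-tailed, EHR-like outcomes.

```python
# A minimal, hypothetical sketch: modified covariates method with an L1-penalized
# LAD (median) regression. This is NOT the authors' proposed estimator; the
# simulated data, penalty value, and variable names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 2000, 10
X = rng.normal(size=(n, p))                        # baseline covariates
T = rng.choice([-1.0, 1.0], size=n)                # 1:1 randomized treatment coded -1/+1
tau = 1.5 * X[:, 0] - 1.0 * X[:, 1]                # true heterogeneous effect (unknown in practice)
y = X[:, 2] + 0.5 * T * tau + rng.standard_t(df=3, size=n)  # heavy-tailed noise motivates LAD

# Modified covariates: regress y on (T/2) * X; the fitted linear score beta'X then
# approximates the treatment effect function under 1:1 randomization.
W = (T[:, None] / 2.0) * X
lad = QuantileRegressor(quantile=0.5, alpha=0.01, solver="highs")  # LAD loss + L1 sparsity penalty
lad.fit(W, y)

tau_hat = X @ lad.coef_                            # estimated effect score for each patient
print("corr(true effect, estimated score):", round(np.corrcoef(tau, tau_hat)[0, 1], 3))
```

The median loss (quantile 0.5) is what makes this an LAD fit; swapping in squared error recovers the usual modified-covariates least-squares version discussed in the abstract.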

  • Single Report. Improving Access to and Usability of Systematic Review Data for Health Systems Guidelines Development. Feb 28, 2019. Annette M Totten et al. doi:10.23970/ahrqepcmethengageimproving. (1 citation)

  • Research Article. AHRQ Series on Improving Translation of Evidence: Linking Evidence Reports and Performance Measures to Help Learning Health Systems Use New Information for Improvement. The Joint Commission Journal on Quality and Patient Safety, Oct 1, 2019. C Michael White et al. doi:10.1016/j.jcjq.2019.05.002. (4 citations)

  • Front Matter. Optimizing Care Transitions: Adapting Evidence-Informed Solutions to Local Contexts. The Joint Commission Journal on Quality and Patient Safety, Jul 24, 2017. Lianne P Jeffs. doi:10.1016/j.jcjq.2017.06.002. (2 citations)

  • Research Article. Applied Methods for Estimating Transition Probabilities from Electronic Health Record Data. Medical Decision Making, Feb 1, 2021. Patricia J Rodriguez et al. doi:10.1177/0272989x20985752. (9 citations)

Electronic health record (EHR) data contain longitudinal patient information and standardized diagnostic codes. EHR data may be useful for estimating transition probabilities for state-transition models, but no guidelines exist on appropriate methods. We applied 3 potential methods to estimate transition probabilities from EHR data, using pediatric eating disorders (EDs) as a case study. We obtained EHR data from PEDsnet, which includes 8 US children's hospitals. Data included inpatient, outpatient, and emergency department visits for all patients with an ED. We mapped diagnoses to 3 ED health states: anorexia nervosa, bulimia nervosa, and other specified feeding or eating disorder. We estimated 1-y transition probabilities for males and females using 3 approaches: simple first-last proportions, a multistate Markov (MSM) model, and independent survival models. Transition probability estimates varied widely between approaches. The first-last proportion approach estimated higher probabilities of remaining in the same health state, while the MSM and independent survival approaches estimated higher probabilities of transitioning to a different health state. All estimates differed substantially from published literature. As a source of health state information, EHR data are incomplete and sometimes inaccurate. EHR data were especially challenging for EDs, limiting the estimation and interpretation of transition probabilities. The 3 approaches produced very different transition probability estimates. Estimates varied considerably from published literature and were rescaled and calibrated for use in a microsimulation model. Estimation of transition probabilities from EHR data may be more promising for diseases that are well documented in the EHR. Furthermore, clinicians and health systems should work to improve documentation of ED in the EHR. Further research is needed on methods for using EHR data to inform transition probabilities.
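Of the three approaches compared above, the simple first-last proportion method is the easiest to show compactly. The sketch below is a hypothetical pandas illustration of that approach only, using toy visit records, assumed column names, and a 365-day horizon; it is not the PEDsnet analysis code, and the multistate Markov and independent survival approaches would need dedicated libraries.

```python
# Minimal sketch of the "simple first-last proportion" approach: for each patient,
# compare the first recorded health state with the first state recorded at least
# one year later, then tabulate row-normalized counts as transition probabilities.
# Column names and the toy data are assumptions, not PEDsnet fields.
import pandas as pd

visits = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "visit_date": pd.to_datetime([
        "2019-01-05", "2019-06-10", "2020-01-20",
        "2019-03-01", "2020-03-15",
        "2019-02-11", "2019-08-02", "2020-02-28",
    ]),
    "state": ["AN", "AN", "OSFED", "BN", "BN", "AN", "BN", "BN"],
})

def first_last_transition(df: pd.DataFrame, horizon_days: int = 365) -> pd.DataFrame:
    """Return a row-normalized first-to-last state transition matrix."""
    pairs = []
    for _, grp in df.sort_values("visit_date").groupby("patient_id"):
        first = grp.iloc[0]
        # visits at or beyond the horizon from the first visit
        later = grp[grp["visit_date"] >= first["visit_date"] + pd.Timedelta(days=horizon_days)]
        if later.empty:
            continue                     # patient not observed long enough
        pairs.append((first["state"], later.iloc[0]["state"]))
    counts = pd.crosstab(
        pd.Series([p[0] for p in pairs], name="from"),
        pd.Series([p[1] for p in pairs], name="to"),
    )
    return counts.div(counts.sum(axis=1), axis=0)  # each row sums to 1

print(first_last_transition(visits))
```

As the abstract notes, this approach tends to favor staying in the same state, because patients who transition between the first and last observation but return are not captured.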

  • Research Article. Using Electronic Health Record Data to Support Research and Quality Improvement: Practical Guidance from a Qualitative Investigation. ACI Open, Jan 1, 2020. Daria F Ferro et al. doi:10.1055/s-0040-1713421. (2 citations)

Objective The aim of the study is to identify how academic health centers (AHCs) have established infrastructures to leverage electronic health record (EHR) data to support research and quality improvement (QI). Methods Phone interviews of 18 clinical informaticians with expertise gained over three decades at 24 AHCs were transcribed for qualitative analysis on three levels. In Level I, investigators independently used NVivo software to code and identify themes expressed in the transcripts. In Level II, investigators reexamined coded transcripts and notes and contextualized themes in the learning health system paradigm. In Level III, an informant subsample validated and supplemented findings. Results Level I analysis yielded six key “determinants”—Institutional Relationships, Resource Availability, Data Strategy, Response to Change, Leadership Support, and Degree of Mission Alignment—which, according to local context, affect use of EHR data for research and QI. Level II analysis contextualized these determinants in a practical frame of reference, yielding a model of learning health system maturation, over-arching key concepts, and self-assessment questions to guide AHC progress toward becoming a learning health system. Level III informants validated and supplemented findings. Discussion Drawn from the collective knowledge of experienced informatics professionals, the findings and tools described offer practical support to help clinical informaticians leverage EHR data for research and QI in AHCs. Conclusion The learning health system model builds on the tripartite AHC mission of research, education, and patient care. AHCs must deliberately transform into learning health systems to capitalize fully on EHR data as a staple of health learning.

  • Research Article. Continuity and Completeness of Electronic Health Record Data for Patients Treated With Oral Hypoglycemic Agents: Findings From Healthcare Delivery Systems in Taiwan. Frontiers in Pharmacology, Apr 4, 2022. Chien-Ning Hsu et al. doi:10.3389/fphar.2022.845949. (6 citations)

Objective: To evaluate the continuity and completeness of electronic health record (EHR) data, and the concordance of select clinical outcomes and baseline comorbidities between EHR and linked claims data, from three healthcare delivery systems in Taiwan. Methods: We identified oral hypoglycemic agent (OHA) users from the Integrated Medical Database of National Taiwan University Hospital (NTUH-iMD), which was linked to the National Health Insurance Research Database (NHIRD), from June 2011 to December 2016. A secondary evaluation involved two additional EHR databases. We created consecutive 90-day periods before and after the first recorded OHA prescription and defined patients as having continuous EHR data if there was at least one encounter or prescription in a 90-day interval. EHR data completeness was measured by dividing the number of encounters in the NTUH-iMD by the number of encounters in the NHIRD. We assessed the concordance between EHR and claims data on three clinical outcomes (cardiovascular events, nephropathy-related events, and heart failure admission). We used individual comorbidities that comprised the Charlson comorbidity index to examine the concordance of select baseline comorbidities between EHRs and claims. Results: We identified 39,268 OHA users in the NTUH-iMD. Thirty-one percent (n = 12,296) of these users contributed to the analysis that examined data continuity during the 6-month baseline and 24-month follow-up period; 31% (n = 3,845) of the 12,296 users had continuous data during this 30-month period and EHR data completeness was 52%. The concordance of major cardiovascular events, nephropathy-related events, and heart failure admission was moderate, with the NTUH-iMD capturing 49–55% of the outcome events recorded in the NHIRD. The concordance of comorbidities was considerably different between the NTUH-iMD and NHIRD, with an absolute standardized difference >0.1 for most comorbidities examined. Across the three EHR databases studied, 29–55% of the OHA users had continuous records during the 6-month baseline and 24-month follow-up period. Conclusion: EHR data continuity and data completeness may be suboptimal. A thorough evaluation of data continuity and completeness is recommended before conducting clinical and translational research using EHR data in Taiwan.
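As a rough illustration of the two definitions used above (continuity: at least one encounter in every consecutive 90-day interval around the index prescription; completeness: EHR encounters divided by linked-claims encounters), here is a small pandas sketch. The helper names, column handling, and example numbers are assumptions for illustration, not the NTUH-iMD/NHIRD analysis code.

```python
# Minimal sketch of the continuity and completeness checks described above,
# under hypothetical column names and toy data.
import pandas as pd

def is_continuous(encounter_dates: pd.Series, index_date: pd.Timestamp,
                  n_periods_before: int = 2, n_periods_after: int = 8,
                  period_days: int = 90) -> bool:
    """True if every consecutive 90-day window around the index date
    contains at least one encounter (defaults: 6-month baseline, 24-month follow-up)."""
    offsets = (encounter_dates - index_date).dt.days
    for k in range(-n_periods_before, n_periods_after):
        lo, hi = k * period_days, (k + 1) * period_days
        if not ((offsets >= lo) & (offsets < hi)).any():
            return False
    return True

def completeness(n_ehr_encounters: int, n_claims_encounters: int) -> float:
    """EHR data completeness: encounters captured in the EHR divided by
    encounters recorded in the linked claims data."""
    return n_ehr_encounters / n_claims_encounters if n_claims_encounters else float("nan")

# Toy example: index OHA prescription on 2015-06-01, shorter window to keep it small.
dates = pd.to_datetime(pd.Series(["2015-03-10", "2015-06-01", "2015-09-15", "2016-01-02"]))
print(is_continuous(dates, pd.Timestamp("2015-06-01"), n_periods_before=1, n_periods_after=3))
print(completeness(n_ehr_encounters=52, n_claims_encounters=100))   # -> 0.52, as in the abstract
```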

  • Research Article. Enhancement in line of therapy (LoT) derivation from real-world data (RWD) from electronic health records (EHR) via integration of medical claims data. Journal of Clinical Oncology, Jun 1, 2023. Smita Agrawal et al. doi:10.1200/jco.2023.41.16_suppl.6514.

6514 Background: Clinical RWD derived from EHRs is becoming increasingly important for clinical research, trial design, regulatory decisions etc. These applications require identification of lines of therapy (LoT) which are typically not captured in EHR and must be abstracted from other clinical and medication data. EHR data has significant missingness which can be complemented with other data sources such as medical claims data. In this study, we demonstrate how our proprietary line of therapy algorithms for solid cancers show significant improvements when built using integrated EHR and claims data when compared to EHR data alone. Methods: For this analysis, ConcertAI’s RWD360 dataset integrated with a large administrative open-claims dataset (>90% overlap) for 14 solid cancer indications (Breast, Bladder, Lung, Prostate, Pancreas, Melanoma, Liver, Head & Neck, Renal, Colorectal, Melanoma, Ovarian, Thyroid, Endometrial) was used. The date of advanced/metastatic diagnosis used as the index date for LoTs was derived from the EHR data and medications from both EHR and claims data were used. We ran our LoT algorithms on EHR data with and without claims data and evaluated the impact of integrating claims data on the quantity and quality of LoT output. Results: The inclusion of medication data from claims significantly increased (7-22%) the number of patients for which LoTs could be extracted from the EHR data. Furthermore, we observed increases in number of lines per patient, length of lines and medications per line across cohorts. The distance between index date and 1st line start date was shortened in a subset (2-12%) of patients as a result. In a small fraction of cases, we even observed removal of false lines as some of the lines moved to adjuvant/neoadjuvant setting by filling in missing medication from claims. Overall, 7-39% patients in the LoT cohorts were impacted by addition of claims. Results for a few cancer types are presented in Table 1. We also compared the top LoTs derived from the integrated dataset against the standard of care for that cancer and observed very good concordance. Conclusions: Deriving LoTs by integrating data from multiple data sources such as EHR and claims can significantly improve its accuracy. [Table: see text]
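The LoT algorithms referenced above are proprietary, so they are not reproduced here. The sketch below only illustrates the integration step the abstract describes: pooling medication records from EHR and claims before grouping drugs started near the first post-index medication into a first line. The 28-day grouping window, column names, and toy records are assumptions made for the sketch.

```python
# Generic illustration of integrating EHR and claims medication records before
# deriving a first line of therapy. This is NOT the proprietary ConcertAI
# algorithm; the window, columns, and records are hypothetical.
import pandas as pd

ehr_meds = pd.DataFrame({
    "patient_id": [1, 1],
    "drug": ["carboplatin", "pemetrexed"],
    "start_date": pd.to_datetime(["2021-02-01", "2021-02-01"]),
    "source": "ehr",
})
claims_meds = pd.DataFrame({
    "patient_id": [1],
    "drug": ["pembrolizumab"],              # captured only in claims
    "start_date": pd.to_datetime(["2021-02-10"]),
    "source": "claims",
})

def first_line(meds: pd.DataFrame, index_date: pd.Timestamp, window_days: int = 28) -> set:
    """Drugs started within `window_days` of the earliest post-index medication
    are grouped into the first line of therapy."""
    post = meds[meds["start_date"] >= index_date].sort_values("start_date")
    if post.empty:
        return set()
    anchor = post["start_date"].iloc[0]
    in_window = post[post["start_date"] <= anchor + pd.Timedelta(days=window_days)]
    return set(in_window["drug"])

all_meds = pd.concat([ehr_meds, claims_meds]).drop_duplicates(subset=["patient_id", "drug", "start_date"])
index_date = pd.Timestamp("2021-01-15")     # advanced/metastatic diagnosis date from the EHR

print("EHR only:    ", first_line(ehr_meds, index_date))
print("EHR + claims:", first_line(all_meds, index_date))
```

Running the toy example shows the point made in the abstract: the claims-only pembrolizumab record changes the composition of the derived first line, which is the kind of enhancement the integrated dataset provides.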

More from: Journal of Clinical Epidemiology
  • Front Matter. David Sackett Young Investigator Award, Peer Reviewer of the Year Award, and Peer Reviewer Acknowledgment. Nov 6, 2025. doi:10.1016/j.jclinepi.2025.112044.
  • Research Article. Scoping review authors view knowledge user consultations as beneficial but not without challenges: a qualitative study. Nov 1, 2025. Elaine Toomey et al. doi:10.1016/j.jclinepi.2025.111928.
  • Discussion. Letter to the editor: The necessity of specifying measurement models: a critical reappraisal of specification issues in PRECIOUS. Nov 1, 2025. Xin Meng et al. doi:10.1016/j.jclinepi.2025.111956.
  • Research Article. Prevalence and predictors of potentially inappropriate prescribing using codified STOPP-START and Beers criteria: a retrospective cohort study in Ontario's older population. Nov 1, 2025. Lise M Bjerre et al. doi:10.1016/j.jclinepi.2025.111932.
  • Discussion. Response to: "Identifying variables that independently predict…" is not a well-defined research task. Nov 1, 2025. Brett P Dyer. doi:10.1016/j.jclinepi.2025.112041.
  • Addendum. Corrigendum to GRADE guidance 39: using GRADE-ADOLOPMENT to adopt, adapt or create contextualized recommendations from source guidelines and evidence syntheses [Journal of Clinical Epidemiology 81 (2024) 111494]. Nov 1, 2025. Miloslav Klugar et al. doi:10.1016/j.jclinepi.2025.111969.
  • Research Article. Methodological review reveals essential gaps and inconsistencies in clinical claims, effects and outcomes in HTA reviews of diagnostic tests. Nov 1, 2025. Jacqueline Dinnes et al. doi:10.1016/j.jclinepi.2025.112040.
  • Research Article. Commentary: "Identifying variables that independently predict…" is not a well-defined research task. Nov 1, 2025. John B Carlin. doi:10.1016/j.jclinepi.2025.112043.
  • Research Article. Preference-based controlled design: toward increased patients' engagement, efficiency and external validity of cardiovascular clinical trials. Oct 31, 2025. Bjorn Redfors et al. doi:10.1016/j.jclinepi.2025.112039.
  • Research Article. Most methodological characteristics do not exaggerate effect estimates in nutrition RCTs: findings from a meta-epidemiological study. Oct 31, 2025. Gina Bantle et al. doi:10.1016/j.jclinepi.2025.112038.
