Abstract

Objectives: Clinical prediction rules (CPRs) are tools that help clinicians address diagnostic or prognostic uncertainty using a patient's symptoms, signs, and test results. Developing a CPR involves three steps: derivation, validation, and impact study. Poor design, methods, or reporting at any of these steps can lead to research waste. No study has evaluated whether impact studies are conducted with appropriate design, methods, and reporting. We therefore conducted a systematic review to summarize the design, methods, and reporting of impact studies of cardiovascular CPRs. We also compared the quality of methods and reporting of appropriately designed impact studies with that of control studies evaluating other types of nonpharmacologic intervention.

Methods: We reviewed studies evaluating the impact of cardiovascular CPRs included in the International Register of CPRs. We identified impact studies by conducting forward citation searches of these CPRs. For cardiovascular CPRs never published in a journal, we instead searched electronic databases to identify their impact studies. We categorized the design of each impact study as ideal (randomized experiment), alternative (non-randomized experiment, excluding uncontrolled before-after studies), or inappropriate (all other study designs). For impact studies with appropriate designs, we assessed methods using the Cochrane risk-of-bias and ROBINS-I tools, and assessed reporting using the CONSORT statement, relevant extensions to the CONSORT statement, and the TREND statement. For each appropriately designed impact study, we identified a contemporaneous control study with a matching design published in the same journal and compared their methods and reporting.

Results: We screened a total of 42,769 references: 40,644 from forward citation searches of 194 CPRs and 2,125 from electronic database searches of 4 CPRs.
Of the 110 impact studies of cardiovascular CPRs identified, 59.1% used inappropriate designs (40 uncontrolled before-after studies, 7 cohort studies, and 18 non-comparative studies), 9.1% used alternative designs (2 non-randomized trials, 4 interrupted time series studies, and 4 repeated measures studies), and 31.8% used ideal designs (12 cluster randomized trials and 23 randomized controlled trials). Overall risk of bias was substantial in 80% (31 of 45) of the impact studies with appropriate designs. On average, impact studies adhered to only 44.1% (9.3 of 21) of the domains in the relevant reporting guidelines. There was no clear difference between impact studies and their matched control studies in the proportion of studies with substantial risk of bias or in the mean proportion of reporting domains not complied with.

Conclusions: In this first systematic review evaluating the design, methods, and reporting of impact studies, we found that the vast majority of impact studies either used study designs inappropriate for assessing the effect of using CPRs, applied methods carrying a substantial risk of bias, or did not comply with reporting guidelines. All stakeholders in CPR development should take concerted action to increase the value of this research.
