Abstract

Objective: We compared the methods used in systematic reviews and meta-analyses (SRMAs) of gabapentin with current recommendations for synthesizing harms.

Study Design & Setting: We followed recommended systematic review practices. We selected reliable SRMAs of gabapentin (i.e., those meeting a pre-defined set of methodological criteria) that assessed at least one harm. We extracted and compared methods in four areas: pre-specification, searching, analysis, and reporting. Whereas our focus in this paper is on the methods used, Part 2 examines the results for harms across reviews.

Results: We screened 4320 records and identified 157 SRMAs of gabapentin, 70 of which were reliable. Most reliable reviews (51/70; 73%) reported following a general guideline for SRMA conduct or reporting, but none reported following recommendations specifically for synthesizing harms. Across all domains assessed, review methods were designed to address questions of benefit and rarely included the additional methods recommended for evaluating harms.

Conclusion: The approaches to assessing harms in the SRMAs we examined were tokenistic and unlikely to produce valid summaries of harms to guide decisions. A paradigm shift is needed. At a minimum, reviewers should describe any limitations to their assessment of harms and provide clearer descriptions of their methods for synthesizing harms.

Highlights

  • Systematic reviews of randomized controlled trials are often considered the pinnacle of the evidence pyramid for answering research questions related to effectiveness.[1]

  • To be included in our study, we required that reviews: (i) be systematic reviews or meta-analyses; (ii) examine gabapentin for one of its commonly prescribed (on- or off-label) conditions, including alcohol dependence, epilepsy, pain, psychiatric disorders, restless legs syndrome, and vasomotor symptoms; (iii) report any results for harms; and (iv) use reliable methods.

  • The defaults for meta-analysis in most statistical programs used to conduct meta-analysis (e.g., RevMan, R, Stata) are inverse-variance models, which are often biased when analyzing rare events.[36,37,38,39] We found the most common meta-analysis model was Mantel-Haenszel (19/44; 43%), with only 9/44 (21%) reviews not specifying which model was used, suggesting most systematic reviewers know to change the model from the default when analyzing harms (see the sketch after this list).
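
To make the distinction concrete, the following is a minimal sketch in Python. The trial counts and function names are invented for illustration and do not come from the reviews we assessed; real analyses would use established routines (e.g., RevMan's Mantel-Haenszel option or a dedicated meta-analysis package) rather than hand-rolled code. The sketch shows why the choice of pooling model matters when events are rare: a naive inverse-variance pooling of log odds ratios needs continuity corrections whenever a cell is zero, whereas the Mantel-Haenszel estimator pools the 2x2 tables directly.

# Illustrative sketch only: pooling odds ratios across trials with rare events,
# comparing a naive inverse-variance approach with the Mantel-Haenszel estimator.
# All counts below are hypothetical.
import math

# Hypothetical 2x2 tables per trial: (events_trt, n_trt, events_ctl, n_ctl)
trials = [
    (1, 120, 0, 118),   # zero cell: inverse variance needs a continuity correction
    (2, 150, 1, 152),
    (0, 80, 0, 79),     # double-zero trial: contributes nothing to either estimate
    (3, 200, 1, 195),
]

def inverse_variance_or(trials, cc=0.5):
    """Naive fixed-effect inverse-variance pooled OR, adding a 0.5 continuity
    correction to every cell of any table that contains a zero (one common
    convention) and dropping double-zero trials."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c
        if a == 0 and c == 0:
            continue  # double-zero trials are typically excluded
        if 0 in (a, b, c, d):
            a, b, c, d = a + cc, b + cc, c + cc, d + cc
        log_or = math.log((a * d) / (b * c))
        weight = 1.0 / (1/a + 1/b + 1/c + 1/d)  # inverse of the log-OR variance
        num += weight * log_or
        den += weight
    return math.exp(num / den)

def mantel_haenszel_or(trials):
    """Mantel-Haenszel pooled OR; single-zero-cell tables need no continuity
    correction, and double-zero tables simply add zero to both sums."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c
        n = n1 + n2
        num += a * d / n
        den += b * c / n
    return num / den

print(f"Inverse-variance OR: {inverse_variance_or(trials):.2f}")
print(f"Mantel-Haenszel OR:  {mantel_haenszel_or(trials):.2f}")

On these invented data the two estimators give noticeably different pooled odds ratios, driven entirely by how the zero cells are handled, which is the reason guidance favours Mantel-Haenszel (or similar) models over the inverse-variance default when synthesizing rare harms.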



Introduction

Systematic reviews of randomized controlled trials are often considered the pinnacle of the evidence pyramid for answering research questions related to effectiveness.[1] Guidelines recommend that potential harms (Box 1) be assessed alongside potential benefits to avoid one-sided summaries of evidence.[2] A given systematic review might take one of three approaches to assessing harms: pre-specifying all harms of interest, not pre-specifying any harms, or a hybrid approach (Box 1).[3] The choice of approach might depend on the intervention and setting, which can dictate whether an outcome is treated as a potential harm or benefit. For example, weight gain is considered a harm in trials of antipsychotics but might be a benefit in trials of interventions for eating disorders. These approaches have complementary strengths and weaknesses.[3]
