Abstract

Meta-analyses can be compromised by studies' internal biases (e.g., confounding in nonrandomized studies) as well as publication bias. These biases often operate nonadditively: publication bias that favors significant, positive results selects indirectly for studies with more internal bias. We propose sensitivity analyses that address two questions: (1) "For a given severity of internal bias across studies and of publication bias, how much could the results change?"; and (2) "For a given severity of publication bias, how severe would internal bias have to be, hypothetically, to attenuate the results to the null or by a given amount?" These methods consider the average internal bias across studies, obviating the need to specify the bias in each study individually. The analyst can assume that internal bias affects all studies, or alternatively that it affects only a known subset (e.g., nonrandomized studies). The internal bias can be of unknown origin or, for certain types of bias in causal estimates, can be bounded analytically. The analyst can specify the severity of publication bias or, alternatively, consider a "worst-case" form of publication bias. Robust estimation methods accommodate non-normal effects, small meta-analyses, and clustered estimates. As we illustrate by re-analyzing published meta-analyses, the methods can provide insights that are not captured by simply considering each bias in turn. An R package implementing the methods is available (multibiasmeta).

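To make the two sensitivity questions concrete, the following is a minimal R sketch of how such an analysis might be set up. The data frame dat, the functions multibias_meta() and multibias_evalue(), and their arguments are written here as assumptions about what the multibiasmeta interface could look like, not a verified description of its API; consult the package documentation for the actual functions and signatures.

```r
# Hypothetical sketch (assumed interface and assumed data frame `dat` with
# columns yi, vi, and a logical `randomized` indicator).
# install.packages("multibiasmeta")  # if not already installed
library(multibiasmeta)

# Question 1: for an assumed average internal bias (on the scale of the point
# estimates) and an assumed publication-bias selection ratio, how much does
# the pooled estimate change?
corrected <- multibias_meta(
  yi = dat$yi,                 # study point estimates
  vi = dat$vi,                 # within-study variances
  biased = !dat$randomized,    # only nonrandomized studies assumed internally biased
  selection_ratio = 4,         # affirmative results assumed 4x as likely to be published
  bias_affirmative = log(1.5), # assumed mean internal bias among affirmative studies
  bias_nonaffirmative = 0      # assumed mean internal bias among nonaffirmative studies
)
corrected

# Question 2: for the same assumed selection ratio, how severe would internal
# bias have to be to attenuate the pooled estimate to the null (q = 0)?
evalue <- multibias_evalue(
  yi = dat$yi,
  vi = dat$vi,
  biased = !dat$randomized,
  selection_ratio = 4,
  q = 0
)
evalue
```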