Background
The aims of this study were to assess the presence of inverse publication bias (IPB) in adverse events, evaluate the performance of visual examination, and explore the impact of considering effect direction in statistical tests for such assessments.

Methods
We conducted a cross-sectional study using SMART Safety, the largest dataset for evidence synthesis of adverse events. Visual assessment was performed using contour-enhanced funnel plots, trim-and-fill funnel plots, and sample-size-based funnel plots. Two authors assessed these plots independently, and their agreement was quantified using kappa statistics. Additionally, IPB was quantitatively assessed using both one- and two-sided versions of Egger's and Peters' tests.

Results
In the SMART Safety dataset, we identified 277 main meta-analyses of safety outcomes with at least 10 individual estimates after dropping missing data. About 13.7–16.2% of meta-analyses exhibited IPB according to the one-sided test results. The kappa statistics for the visual assessments ranged roughly from 0.3 to 0.5, indicating fair to moderate agreement. Moving from the two-sided to the one-sided Egger's test, 57 of the 72 (79.2%) meta-analyses that showed significant IPB under the two-sided test became non-significant, while 15 other meta-analyses changed from non-significant to significant.

Conclusions
Our findings provide supporting evidence of IPB in the SMART Safety dataset of adverse events. They also highlight the importance of carefully accounting for the direction of statistical tests for IPB, as well as the challenges of assessing IPB with statistical methods, especially given that the number of studies is typically small. Qualitative assessments may be a necessary supplement to gain a more comprehensive understanding of IPB.
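The contrast between one- and two-sided results above comes down to how the p-value of the regression intercept is computed. The sketch below is not the authors' implementation; it is a minimal illustration of Egger's regression test in which the hypothesized direction of bias (here, a negative intercept) is an assumption chosen for demonstration:

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses, one_sided=False):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    the intercept estimates small-study asymmetry. The one-sided variant
    here tests H1: intercept < 0, an illustrative choice of direction --
    the appropriate direction depends on the suspected bias.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses              # standardized effects
    X = np.column_stack([np.ones(len(ses)), 1.0 / ses])  # intercept + precision
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    df = len(y) - 2
    sigma2 = resid @ resid / df
    se_intercept = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    t = beta[0] / se_intercept
    if one_sided:
        p = stats.t.cdf(t, df)         # P(T <= t): tests intercept < 0
    else:
        p = 2 * stats.t.sf(abs(t), df)  # conventional two-sided p-value
    return beta[0], p
```

Note that when the observed intercept lies in the hypothesized direction, the one-sided p-value is half the two-sided one; when it lies in the opposite direction, the one-sided p-value exceeds 0.5. This is why results can flip in both directions, as in the 57 and 15 meta-analyses reported above.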