Abstract

Background: Many randomised trials have count outcomes, such as the number of falls or the number of asthma exacerbations. These outcomes have been treated as counts, treated as continuous outcomes or dichotomised, and analysed using a variety of analytical methods. This study examines whether different methods of analysis yield estimates of intervention effect that are similar enough to be reasonably pooled in a meta-analysis.

Methods: Data were simulated for 10,000 randomised trials under three different amounts of overdispersion, four different event rates and two effect sizes. Each simulated trial was analysed using nine different methods of analysis: rate ratio, Poisson regression, negative binomial regression, risk ratio from dichotomised data, survival to the first event, two methods of adjusting for multiple survival times, ratio of means and ratio of medians. Individual patient data were gathered from eight fall prevention trials, and similar analyses were undertaken.

Results: All methods produced similar effect sizes when there was no difference between treatments. Results were also similar when there was a moderate difference, with two exceptions as the event became more common: (1) risk ratios computed from dichotomised count outcomes and hazard ratios from survival analysis of the time to the first event yielded intervention effects that differed from rate ratios estimated from the negative binomial model (the reference model), and (2) the precision of the estimates differed depending on the method used, which may affect both the pooled intervention effect and the observed heterogeneity. The results of the case study of individual patient data from eight trials evaluating exercise programmes to prevent falls in older people supported the simulation study findings.

Conclusions: Information about the differences between treatments is lost when event rates increase and the outcome is dichotomised or only the time to the first event is analysed; otherwise, similar results are obtained. Further research is needed to examine the effect of the differing variances from the different methods on the confidence intervals of pooled estimates.

Electronic supplementary material: The online version of this article (doi:10.1186/s13643-015-0144-x) contains supplementary material, which is available to authorized users.

Highlights

  • Many randomised trials have count outcomes, such as the number of falls or the number of asthma exacerbations

  • Simulations with a very small mean and no overdispersion yielded estimates for all analytical methods that were similar to the negative binomial rate ratio (Fig. 1, Table 2)

  • The percentile-based confidence intervals (CIs) around the estimates are very similar for all the methods (Fig. 1)


Introduction

Often the outcomes measured in medical research are count outcomes: these measure the number of times a particular event happens to an individual in a defined period. Examples of count outcomes include the number of falls by the individual, the number of asthma exacerbations or the number of incontinence episodes. These outcomes are commonly measured in randomised controlled trials (RCTs) to determine the effect of an intervention. There are many ways of summarising the difference between interventions when the outcome is a count outcome [1,2,3], such as:
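The contrast between two of these summaries, a rate ratio computed from the raw counts and a risk ratio computed after dichotomising the counts into "any event" versus "no event", can be sketched with a short simulation. This is an illustrative sketch only, not the study's simulation code: it assumes Poisson counts (i.e. no overdispersion), hypothetical arm sizes of 500 and hypothetical means of 2.0 and 1.5 events per participant (a true rate ratio of 0.75), and the helper name `simulate_counts` is invented for this example.

```python
import math
import random

random.seed(1)

def simulate_counts(n, mean):
    """Simulate n Poisson counts with the given mean (Knuth's algorithm)."""
    def poisson(lam):
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                return k
            k += 1
    return [poisson(mean) for _ in range(n)]

# Hypothetical trial: 2.0 events/person in control, 1.5 in the intervention arm,
# so the true rate ratio is 0.75.
control = simulate_counts(500, 2.0)
treat = simulate_counts(500, 1.5)

# (1) Rate ratio: mean events per participant, intervention vs control.
rate_ratio = (sum(treat) / len(treat)) / (sum(control) / len(control))

# (2) Risk ratio after dichotomising each count into "any event" vs "none".
risk_treat = sum(c > 0 for c in treat) / len(treat)
risk_control = sum(c > 0 for c in control) / len(control)
risk_ratio = risk_treat / risk_control

print(f"rate ratio ~ {rate_ratio:.2f}")
print(f"risk ratio ~ {risk_ratio:.2f}")  # attenuated toward 1 when events are common
```

With events this common, most participants in both arms experience at least one event, so the dichotomised risk ratio sits much closer to 1 than the underlying rate ratio of 0.75, illustrating the information loss described in the abstract's conclusions.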

