Abstract

The Journal of Physiology and British Journal of Pharmacology jointly published an editorial series in 2011 to improve standards in statistical reporting and data analysis. It is not known whether reporting practices changed in response to the editorial advice. We conducted a cross-sectional analysis of reporting practices in a random sample of research papers published in these journals before (n = 202) and after (n = 199) publication of the editorial advice. Descriptive data are presented. There was no evidence that reporting practices improved following publication of the editorial advice. Overall, 76-84% of papers that reported written measures summarizing data variability used standard errors of the mean, and 90-96% of papers did not report exact p-values for primary analyses and post-hoc tests. Of papers that plotted measures summarizing data variability, 76-84% used standard errors of the mean, and only 2-4% of papers plotted the raw data used to calculate variability. Of papers that reported p-values between 0.05 and 0.1, 56-63% interpreted these as trends or as statistically significant. Implied or gross spin was noted incidentally in papers published before (n = 10) and after (n = 9) the editorial advice. Overall, poor statistical reporting, inadequate data presentation and spin were present before and after the editorial advice was published. While the scientific community continues to implement strategies for improving reporting practices, our results indicate that stronger incentives or enforcement are needed.

Highlights

  • The accurate communication of scientific discovery depends on transparent reporting of methods and results

  • The random sample of audited papers was reasonably representative of the number of papers published each year in the Journal of Physiology and the British Journal of Pharmacology in the two periods of interest (Table 1)

  • This cross-sectional analysis shows there was no substantial improvement in statistical reporting and data presentation in the four years after publication of the editorial advice

  • Poor statistical reporting, inadequate data presentation and spin were present both before and after the editorial advice was published


Introduction

The accurate communication of scientific discovery depends on transparent reporting of methods and results. Information on data variability and results of statistical analyses are required to make accurate inferences. The quality of statistical reporting and data presentation in scientific papers is generally poor. One third of clinical trials in molecular drug interventions and breast cancer selectively report outcomes [1], 60-95% of biomedical research papers report statistical analyses that are not pre-specified or are different to published analysis plans [2], and one third of all graphs published in the prestigious Journal of the American Medical Association cannot be interpreted unambiguously [3].

