Abstract

Statistics are a quintessential part of scientific manuscripts, yet few journals are free of statistics-related errors. Errors can occur in data reporting and presentation, in choosing the appropriate or most powerful statistical test, in misinterpreting or overinterpreting statistics, and in ignoring tests of normality. The statistical software used, one-tailed versus two-tailed tests, and the exclusion or inclusion of outliers can all influence outcomes and should be explicitly reported. This review presents the nonparametric counterparts of common parametric tests, common misinterpretations of the P value, and frequent pitfalls in data reporting. The importance of distinguishing clinical significance from statistical significance using confidence intervals, the number needed to treat, and the minimal clinically important difference is highlighted. The problem of multiple comparisons may lead to false interpretations, especially with p-hacking, in which nonsignificant comparisons are concealed. The review also touches on a few advanced topics, such as heteroscedasticity and multicollinearity in multivariate analyses. Journals have various strategies to minimize inaccuracies, but a sound grasp of statistical concepts remains invaluable for authors and reviewers. It is equally imperative for readers to understand these concepts so that they can interpret studies properly and judge the validity of the conclusions independently.
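
The following is a minimal sketch, assuming Python with NumPy and SciPy, that illustrates two of the points summarized above: a nonparametric counterpart to a common parametric test, and how uncorrected multiple comparisons inflate the chance of a false-positive finding. The data, sample sizes, and thresholds are illustrative only and are not taken from the review.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two small samples drawn from a skewed (non-normal) distribution.
group_a = rng.exponential(scale=1.0, size=20)
group_b = rng.exponential(scale=1.5, size=20)

# Parametric comparison (assumes approximate normality) ...
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# ... and its nonparametric counterpart, which does not.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test p = {t_p:.3f}, Mann-Whitney U p = {u_p:.3f}")

# Multiple comparisons: with 20 independent tests at alpha = 0.05,
# the chance of at least one false positive is roughly 64%.
alpha, m = 0.05, 20
print(f"Family-wise error rate for {m} tests: {1 - (1 - alpha) ** m:.2f}")

# A Bonferroni correction keeps the family-wise rate near alpha
# by testing each comparison at alpha / m.
p_values = np.array([0.001, 0.02, 0.04, 0.30])
print("Significant after Bonferroni:", p_values < alpha / len(p_values))
```

In this sketch, reporting only the comparisons that cross the uncorrected threshold while concealing the rest would be an instance of the p-hacking the review warns against.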
