Abstract

There are several statistical hypothesis tests available for assessing normality, an a priori assumption of most parametric statistical procedures. The usual approach to comparing the performance of normality tests is to use Monte Carlo simulations to obtain point estimates of the corresponding powers. The aim of this work is to improve the assessment of nine normality hypothesis tests. For that purpose, random samples were drawn from several symmetric and asymmetric nonnormal distributions, and Monte Carlo simulations were carried out to compute confidence intervals for the power achieved, for each distribution, by two of the most widely used normality tests, the Kolmogorov–Smirnov test with Lilliefors correction and the Shapiro–Wilk test. In addition, the specificity of each test was computed, again by Monte Carlo simulation, using samples from standard normal distributions. The analysis was then extended to the Anderson–Darling, Cramér–von Mises, Pearson chi-square, Shapiro–Francia, Jarque–Bera, D'Agostino and uncorrected Kolmogorov–Smirnov tests by determining confidence intervals for the areas under the receiver operating characteristic curves. Simulations were performed to this end, in which for each sample from a nonnormal distribution an equal-sized sample was taken from a normal distribution. The Shapiro–Wilk test showed the best overall performance, although in some circumstances the Shapiro–Francia or D'Agostino tests gave better results. The differences between the tests were less clear for smaller sample sizes. Notably, the Shapiro–Wilk and Kolmogorov–Smirnov tests performed quite poorly at distinguishing samples drawn from normal distributions from samples drawn from Student's t distributions.
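
As a rough illustration of the Monte Carlo procedure described above, the sketch below estimates the power of one normality test against one nonnormal alternative and attaches a confidence interval to the estimate. It is a minimal sketch rather than the authors' code: the sample size, number of replications, chi-square(3) alternative and normal-approximation interval are illustrative assumptions, and SciPy's Shapiro–Wilk test stands in for the full battery of nine tests compared in the paper.

```python
import numpy as np
from scipy import stats


def mc_power(sampler, test, n=50, n_sim=10000, alpha=0.05, seed=0):
    """Monte Carlo estimate of a normality test's power: the proportion of
    simulated samples from the nonnormal `sampler` that the test rejects."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = sampler(rng, n)
        _, p_value = test(x)
        rejections += p_value < alpha
    p_hat = rejections / n_sim
    # Normal-approximation 95% confidence interval for the estimated power
    half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n_sim)
    return p_hat, (p_hat - half_width, p_hat + half_width)


# Illustrative run: Shapiro-Wilk power against a chi-square(3) alternative
power, ci = mc_power(lambda rng, n: rng.chisquare(df=3, size=n), stats.shapiro)
print(f"Estimated power: {power:.3f}  95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

Running the same loop with samples drawn from a standard normal distribution instead gives the test's empirical size, i.e. one minus the specificity reported in the paper.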
