Abstract

The assumption of univariate or multivariate normality is implicit in most of the statistical procedures routinely used to analyze univariate and multivariate data. It is now well recognized that, in general, the assumption of normality is at best suspect; see, e.g., Geary (1947), Pearson (1929), Jeffreys (1961), Mudholkar and Srivastava (2000a), and the references therein. It is also well established that when the assumption of normality is violated, most normal-theory procedures lose validity, i.e., Type I error control, or become highly inefficient in terms of power. Numerous goodness-of-fit tests of univariate normality exist in the literature, and no single test uniformly dominates the others; however, theoretical and simulation results indicate that the Shapiro-Wilk test is reasonable and appropriate in most situations of practical importance. The assumption of multivariate normality is harder to justify, since it requires joint normality of the components in addition to their marginal normality. This structural complexity may explain the lag in the development of goodness-of-fit tests for multivariate normality, although the last two decades have produced several competing tests. Moreover, compared with univariate methods, multivariate data analysis methods are more prone to losing Type I error control and power when the normality assumption is violated. The purpose of this article is to present an overview of methods for testing univariate and multivariate normality and to indicate their relative strengths and weaknesses.
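For concreteness, the sketch below (not part of the article) illustrates the two kinds of tests the abstract surveys: the Shapiro-Wilk test for univariate normality, available as scipy.stats.shapiro, and Mardia's skewness and kurtosis statistics, one standard test of multivariate normality. The mardia_test helper is our own illustrative implementation using the usual asymptotic null approximations, not code from the article.

```python
import numpy as np
from scipy import stats

def mardia_test(X):
    """Mardia's test of multivariate normality (illustrative sketch).

    Under H0, n*b1p/6 is approximately chi-square with p(p+1)(p+2)/6 df,
    and the standardized kurtosis is approximately standard normal.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    centered = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))  # MLE covariance
    D = centered @ S_inv @ centered.T       # (i, j) entry: (x_i - xbar)' S^-1 (x_j - xbar)
    b1p = (D ** 3).sum() / n ** 2           # multivariate skewness
    b2p = (np.diag(D) ** 2).sum() / n       # multivariate kurtosis
    skew_stat = n * b1p / 6.0
    skew_p = stats.chi2.sf(skew_stat, p * (p + 1) * (p + 2) / 6.0)
    kurt_stat = (b2p - p * (p + 2)) / np.sqrt(8.0 * p * (p + 2) / n)
    kurt_p = 2 * stats.norm.sf(abs(kurt_stat))
    return (skew_stat, skew_p), (kurt_stat, kurt_p)

rng = np.random.default_rng(0)

# Univariate case: Shapiro-Wilk on a sample of size 200.
x = rng.normal(size=200)
W, p_uni = stats.shapiro(x)
print(f"Shapiro-Wilk: W = {W:.4f}, p = {p_uni:.3f}")

# Multivariate case: Mardia's statistics on trivariate data.
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=200)
(sk, p_sk), (ku, p_ku) = mardia_test(X)
print(f"Mardia skewness: {sk:.2f} (p = {p_sk:.3f}); kurtosis z = {ku:.2f} (p = {p_ku:.3f})")
```

Note that marginal normality of each component (which the univariate test would assess column by column) does not imply joint normality, which is why dedicated multivariate tests such as Mardia's are needed.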
