Abstract

Statistical inference in the form of hypothesis tests and confidence intervals often assumes that the underlying distribution is normal. Similarly, many signal processing techniques rely on the assumption that a stationary time series is normal. As a result, a number of tests have been proposed in the literature for detecting departures from normality. In this article we develop a novel approach to testing normality by constructing a statistical test based on the Edgeworth expansion, which approximates a probability distribution in terms of its cumulants. By modifying one term of the expansion, we define a test statistic that includes information on the first four moments. We compare the proposed test with existing tests for normality on various platykurtic and leptokurtic alternatives, including generalized Gaussian, mixed Gaussian, α-stable and Student’s t distributions. We show that, for some of the considered sample sizes, the proposed test is superior in terms of power for the platykurtic distributions, whereas for the leptokurtic ones it is close to the best tests, such as those of D’Agostino-Pearson, Jarque-Bera and Shapiro-Wilk. Finally, we study two real data examples that illustrate the efficacy of the proposed test.
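
For reference, the first-order Edgeworth expansion of a standardized density f can be written in terms of the standard normal density φ, the probabilists' Hermite polynomials He_r, the skewness γ1 and the excess kurtosis γ2. The abstract does not specify which term the proposed test modifies, so the following shows only the standard form:

```latex
% First-order Edgeworth expansion (standard form; the modified term
% used by the proposed test is not reproduced here):
f(x) \approx \phi(x)\Big[\, 1 + \tfrac{\gamma_1}{6}\,\mathrm{He}_3(x)
    + \tfrac{\gamma_2}{24}\,\mathrm{He}_4(x)
    + \tfrac{\gamma_1^2}{72}\,\mathrm{He}_6(x) \Big],
\qquad
\gamma_1 = \frac{\kappa_3}{\kappa_2^{3/2}},
\qquad
\gamma_2 = \frac{\kappa_4}{\kappa_2^{2}},
```

where κ_r denotes the r-th cumulant. Under normality all cumulants beyond the second vanish, so γ1 = γ2 = 0 and the bracketed correction reduces to 1; sample estimates of the correction terms therefore carry exactly the information on the first four moments that the abstract refers to.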

Highlights

  • Testing the hypothesis of normality is one of the fundamental procedures of statistical analysis.

  • Further ideas include tests based on the empirical characteristic function [10], on the dependence between moments that characterizes normal distributions [11], and on Noughabi’s entropy estimator [12].

  • Developing an omnibus test for normality of a random sample is a challenging and important task in signal processing; it is particularly difficult for symmetric alternatives and for alternatives close to the normal distribution (a minimal power sketch follows this list).
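
A minimal Monte Carlo sketch of the difficulty mentioned in the last highlight, assuming the Shapiro-Wilk test as the detector and Student’s t samples as symmetric alternatives (illustrative only, not the authors’ simulation design):

```python
# Estimate the power of Shapiro-Wilk by Monte Carlo: the fraction of
# samples from the alternative for which the test rejects at level alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, reps = 0.05, 100, 2000

def power(sampler):
    return np.mean([stats.shapiro(sampler()).pvalue < alpha
                    for _ in range(reps)])

# t(20): symmetric and close to normal -> low power, hard to detect.
p_near = power(lambda: rng.standard_t(df=20, size=n))
# t(3): symmetric but heavy-tailed -> much easier to detect.
p_far = power(lambda: rng.standard_t(df=3, size=n))

print(f"power vs t(20): {p_near:.3f}")
print(f"power vs t(3):  {p_far:.3f}")
```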


Introduction

Testing the hypothesis of normality is one of the fundamental procedures of statistical analysis. There is a large number of normality tests. Some of them, such as the χ2 goodness-of-fit test [1] with its variants, the Kolmogorov-Smirnov (KS) one-sample cumulative probability test [2], the Shapiro-Wilk (SW) test [3], the D’Agostino-Pearson (DP) test [4] and the Jarque-Bera (JB) test [5], are nowadays considered classical. These tests are based on comparing the distribution of the observed data to the expected distribution (χ2), on measuring the distance between the empirical and the analytical distribution function (KS), on taking into account transformations of moments of the data, such as skewness and kurtosis (DP and JB), or on calculating a function of order statistics (SW). Beyond these classical tests, let us mention ideas based on the empirical characteristic function [10], on the dependence between moments that characterizes normal distributions [11], or on Noughabi’s entropy estimator [12].
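
The classical tests named above are all available in SciPy, which makes this kind of comparison easy to reproduce; a minimal sketch (not the authors’ code) applying them to a leptokurtic Student’s t sample:

```python
# Apply four classical normality tests to one sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=500)  # leptokurtic alternative

sw = stats.shapiro(x)        # Shapiro-Wilk: function of order statistics
dp = stats.normaltest(x)     # D'Agostino-Pearson: skewness + kurtosis
jb = stats.jarque_bera(x)    # Jarque-Bera: skewness + kurtosis
# KS against a normal with estimated parameters; note that estimating
# the parameters from the data invalidates the standard KS p-value
# (the Lilliefors correction would be needed for an exact level).
ks = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))

for name, res in [("SW", sw), ("DP", dp), ("JB", jb), ("KS", ks)]:
    print(f"{name}: statistic={res.statistic:.4f}, p-value={res.pvalue:.4g}")
```

Replacing the alternative with a platykurtic sample, e.g. stats.gennorm.rvs(4.0, size=500) for a generalized Gaussian with shape parameter greater than 2, gives the regime in which the abstract reports the largest power gains for the proposed test.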
