Abstract
In this paper, we introduce a flexible and widely applicable nonparametric entropy-based testing procedure that can be used to assess the validity of simple hypotheses about a specific parametric population distribution. The testing methodology relies on the characteristic function of the population probability distribution being tested and is attractive in that, regardless of the null hypothesis being tested, it provides a unified framework for conducting such tests. The testing procedure is also computationally tractable and relatively straightforward to implement. In contrast to some alternative test statistics, the proposed entropy test is free from user-specified kernel and bandwidth choices, idiosyncratic and complex regularity conditions, and choices of evaluation grids. Several simulation exercises were performed to document the empirical performance of our proposed test, including a regression example that illustrates how, in some contexts, the approach can be applied to composite hypothesis-testing situations via data transformations. Overall, the testing procedure shows notable promise, exhibiting power that increases appreciably with sample size for a number of alternative distributions when contrasted with hypothesized null distributions. Possible general extensions of the approach to composite hypothesis-testing contexts and directions for future work are also discussed.
Highlights
In this paper, we introduce a flexible and widely applicable nonparametric entropy-based testing procedure that can be used to assess the validity of simple hypotheses about a specific parametric population distribution
Our proposed methodology is based on the well-developed statistical theory and sampling properties of Maximum Entropy (ME), subject to moment condition constraints, which relate to the characteristic functions (CF) of the population probability distribution being tested
If the null hypothesis of interest concerns a parametric family of distributions that admits a moment generating function (MGF), $M_Z(t; \theta) = E\!\left[e^{tZ}\right]$, then the CF can be replaced by the MGF in specifying the moment conditions underlying the definition of an entropy test (ET) statistic (see the sketch below)
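For concreteness, one way such moment conditions could enter the ME problem is sketched below. This is an illustrative assumption, not the paper's exact construction: the evaluation points $t_1, \dots, t_K$ and the specific constraint form are choices made for this sketch only.

\[
\max_{p_1, \dots, p_n} \; -\sum_{j=1}^{n} p_j \log p_j
\quad \text{subject to} \quad
\sum_{j=1}^{n} p_j \, e^{t_k z_j} = M_Z(t_k; \theta_0), \;\; k = 1, \dots, K,
\qquad \sum_{j=1}^{n} p_j = 1 .
\]

Under the CF version, $e^{t_k z_j}$ would be replaced by $\cos(t_k z_j)$ and $\sin(t_k z_j)$, matched to the real and imaginary parts of the null CF $\varphi_Z(t_k; \theta_0)$.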
Summary
“Since there are no a priori arguments for the choice of a particular distribution, one needs to base the choice, and evaluation by statistical means” (Lee 1983). The problem of testing assumptions about the probability distribution underlying random samples of data is an ongoing area of inquiry in statistics and econometrics. While this literature has expanded substantially since Pearson (1900), unified and generally applicable omnibus methodologies that exhibit substantial power for testing a wide range of distributional hypotheses are lacking, although there have been a number of important developments in this regard (e.g., Bowman and Shenton 1975; Epps and Pulley 1983; Zoubir and Arnold 1996; Doornik and Hansen 2008; Meintanis 2011; Wyłomańska et al. 2020). Our proposed methodology is based on the well-developed statistical theory and sampling properties of Maximum Entropy (ME), subject to moment condition constraints, which relate to the characteristic function (CF) of the population probability distribution being tested. The proposed approach in this paper is unique in that it utilizes the more general entropy-derived sample $p_j$ probability weights to define an empirical characteristic function (ECF).
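The paper's exact estimator and test statistic are not reproduced on this page, so the following Python sketch is illustrative only. It solves the ME problem above in its convex dual (exponential-tilting) form and forms a hypothetical ET statistic as a scaled Kullback-Leibler divergence of the fitted weights from the uniform weights $1/n$. The fixed grid t_grid, the dual solver, and the chi-squared reference are all assumptions of this sketch; notably, the paper states that its actual construction avoids user-chosen evaluation grids.

import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def me_weights(z, t_grid, cf_null):
    """Maximum-entropy weights p_j subject to CF moment constraints (sketch).

    z       : (n,) observed sample
    t_grid  : (K,) CF evaluation points -- an assumption of this sketch
    cf_null : callable mapping t_grid to complex CF values phi(t; theta0)
    """
    # Real-valued moment functions: cos(t_k z) and sin(t_k z) for each t_k
    G = np.hstack([np.cos(np.outer(z, t_grid)),
                   np.sin(np.outer(z, t_grid))])          # shape (n, 2K)
    phi = cf_null(np.asarray(t_grid))
    c = np.concatenate([phi.real, phi.imag])              # target moments

    # Convex dual of the ME problem; its minimizer yields the tilted weights
    dual = lambda lam: logsumexp(G @ lam) - lam @ c
    lam_hat = minimize(dual, np.zeros(G.shape[1]), method="BFGS").x

    a = G @ lam_hat
    return np.exp(a - logsumexp(a))                       # p_j > 0, sum to 1

def entropy_test_stat(p):
    """Hypothetical ET form: 2n * KL(p || uniform) = 2n * sum p_j log(n p_j)."""
    n = p.size
    return 2.0 * n * np.sum(p * np.log(n * p))

# Example: simple null H0: Z ~ N(0, 1), so theta is fully specified
rng = np.random.default_rng(0)
z = rng.normal(size=500)
t_grid = np.array([0.5, 1.0, 1.5])
cf_std_normal = lambda t: np.exp(-0.5 * t**2) + 0j        # CF of N(0, 1)
p_hat = me_weights(z, t_grid, cf_std_normal)
et = entropy_test_stat(p_hat)
# Under the assumptions of this sketch, et would be referred to a
# chi-square critical value with 2K degrees of freedom.

Solving the dual rather than the primal keeps the optimization low-dimensional (2K Lagrange multipliers rather than n weights), which is one reason exponential-tilting formulations of ME problems remain computationally tractable as the sample size grows.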