Abstract

As a supplement to summary statistics of information criteria, the closeness of two or more competing non-nested models can be compared under a procedure that is more general than that proposed in Vuong (1989); measures of closeness other than the Kullback–Leibler divergence are allowed. Large deviation theory is used to obtain a bound on the power of rejecting the null hypothesis that the two models are equally close to the true model. This bound can be expressed in terms of a constant γ∈[0,1), which can be computed empirically without any knowledge of the data-generating mechanism. Additionally, based on the constant γ, procedures constructed from different measures of closeness can be compared in their ability to detect a difference between two models.
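For context, the procedure generalizes the classical Vuong (1989) test, which compares two non-nested models via their per-observation log-likelihood ratios. The following is a minimal sketch of that classical test only (not the paper's generalized procedure or its γ bound); the function name and inputs are illustrative assumptions.

```python
import math

def vuong_statistic(loglik1, loglik2):
    """Classical Vuong (1989) statistic for two non-nested models.

    loglik1, loglik2: per-observation log-likelihoods under each model.
    Under the null that the two models are equally close (in
    Kullback-Leibler divergence) to the true model, the statistic
    is asymptotically standard normal.
    """
    n = len(loglik1)
    # Pointwise log-likelihood ratios between the two models.
    m = [a - b for a, b in zip(loglik1, loglik2)]
    mean = sum(m) / n
    var = sum((x - mean) ** 2 for x in m) / n
    return math.sqrt(n) * mean / math.sqrt(var)
```

A large positive (negative) value favors the first (second) model; values near zero fail to distinguish them, which is the regime the paper's power bound addresses.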
