Abstract

Model selection problems arise when constructing unbiased or asymptotically unbiased estimators of measures known as discrepancies in order to identify the best model. Most of the usual criteria combine goodness of fit with parsimony: they aim to maximize a transformed version of the likelihood. For linear regression models with normally distributed errors, the situation is less clear when two models are equivalent: are they close to or far from the unknown true model? In this work, based on stochastic simulation and parametric simulation, we study the behaviour of Vuong's test, Cox's test, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Kullback information criterion (KIC) and the bias-corrected Kullback information criterion, and the ability of these tests to discriminate between non-nested linear models.
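As an illustration only (not taken from the paper), the two classical criteria mentioned above can be computed for a pair of non-nested Gaussian linear models. The sketch below fits two competing one-regressor models by least squares and evaluates AIC and BIC from the Gaussian log-likelihood at the maximum-likelihood variance estimate; the data-generating setup, `fit_ols`, and `criteria` are hypothetical names chosen here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# Simulated data: the true model depends on x1 only
y = 1.0 + 0.5 * x1 + rng.normal(scale=1.0, size=n)

def fit_ols(x, y):
    """Least-squares fit of y on an intercept and x.

    Returns the residual sum of squares and the number of
    estimated parameters (coefficients plus error variance)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return rss, X.shape[1] + 1  # +1 for sigma^2

def criteria(rss, k, n):
    """AIC and BIC from the Gaussian log-likelihood at the MLE."""
    loglik = -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    return aic, bic

# Two non-nested candidates: y ~ x1 versus y ~ x2
for name, x in [("y ~ x1", x1), ("y ~ x2", x2)]:
    rss, k = fit_ols(x, y)
    aic, bic = criteria(rss, k, n)
    print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```

Because the models share no common nesting, such information criteria (and the Vuong and Cox tests studied in the paper) are the natural tools for choosing between them; here the model containing the true regressor should attain the lower criterion values.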
