Abstract

AIC is known in the literature as the best criterion for estimating the regression function, and BIC as the best for identification: when there is no model uncertainty, BIC consistently identifies the true model. However, neither necessarily yields optimal out-of-sample predictive ability. In the presence of model uncertainty, Bayesian model averaging (BMA) is the gold standard for out-of-sample prediction and inference, but because it averages over a given collection of models rather than selecting one, it lacks interpretability, a property that is often desirable in a statistical procedure. We propose a model selection procedure that trades off prediction accuracy against interpretability. The procedure seeks a single model (and is therefore interpretable) whose predictive distribution is closest to that of BMA, so the selected model has better predictive performance than any other candidate model. The procedure can be implemented easily and efficiently with a Markov chain Monte Carlo algorithm. We present a number of examples in a linear regression framework for both real and simulated data, in which the procedure selects models with optimal predictive performance. In the extreme case where there is no model uncertainty, the procedure selects the same model as BIC does.
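The abstract does not specify the closeness measure or the implementation details, so the following is only a minimal illustrative sketch of the idea in a linear regression setting. It assumes BIC-based weights as a stand-in for posterior model probabilities, Gaussian plug-in predictive densities, and a Monte Carlo estimate of KL divergence as the distance to the BMA predictive; the MCMC search over models mentioned in the abstract is replaced by full enumeration, which is feasible only for a handful of predictors. All variable and function names here are hypothetical.

```python
# Hedged sketch: select the single model whose predictive distribution is closest
# (in estimated KL divergence) to the BMA predictive distribution.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear-regression data: only the first two predictors matter.
n, p = 100, 4
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=1.0, size=n)
x_new = rng.normal(size=(20, p))                  # held-out design points

def fit(cols):
    """OLS fit for a candidate subset of predictors; returns coefs, sigma2, BIC."""
    Z = np.column_stack([np.ones(n), X[:, cols]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / n
    k = Z.shape[1] + 1                            # coefficients + error variance
    bic = n * np.log(sigma2) + k * np.log(n)
    return beta, sigma2, bic

models = [cols for r in range(p + 1) for cols in itertools.combinations(range(p), r)]
fits = [fit(list(m)) for m in models]

# BIC weights as approximate posterior model probabilities (an assumption).
bics = np.array([f[2] for f in fits])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

def pred_mean_sd(j, xn):
    """Plug-in Gaussian predictive mean and sd for model j at new points xn."""
    cols = list(models[j])
    beta, sigma2, _ = fits[j]
    Zn = np.column_stack([np.ones(len(xn)), xn[:, cols]])
    return Zn @ beta, np.sqrt(sigma2)

def normal_pdf(v, mu, sd):
    return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Monte Carlo estimate of KL(BMA predictive || model-j predictive) at x_new.
S = 2000
means_sds = [pred_mean_sd(j, x_new) for j in range(len(models))]
comp = rng.choice(len(models), size=S, p=w)       # sample mixture components
draws = np.array([rng.normal(*means_sds[c]) for c in comp])   # BMA predictive draws
bma_dens = sum(w[j] * normal_pdf(draws, *means_sds[j]) for j in range(len(models)))

kl = [np.mean(np.log(bma_dens) - np.log(normal_pdf(draws, *means_sds[j])))
      for j in range(len(models))]

best = int(np.argmin(kl))
print("selected predictors:", models[best])       # typically (0, 1) in this setup
```

Under these assumptions, the selected model is the one whose predictive density tracks the BMA mixture most closely; with a realistic number of predictors, the enumeration step would be replaced by the MCMC search the abstract refers to.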
