Abstract

The Bayesian analysis of the variable selection problem in linear regression with objective priors requires some way of encompassing the class of all submodels of the full linear model, since these submodels are nonnested. Once a nested setting is provided, objective intrinsic priors suitable for computing the model posterior probabilities, on which the selection is based, can be derived. The way of encompassing the models is not unique, and there is no clear indication of which choice is optimal; typically, the class of linear models is encompassed into the full model. In this paper, we explore a new way of encompassing the class of linear models, which in turn produces a new method for variable selection. This method appears to have some advantages over the usual one. Specific intrinsic priors and model posterior probabilities are provided, along with some of their main properties. Comparisons are made with R² and adjusted R², and with other frequentist methods for variable selection such as the lasso. Illustrations on simulated and real data are provided.

Keywords and phrases: calibration curve; determination coefficient; g-priors; intrinsic priors; lasso criterion; model selection; normal linear model; reference priors
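As a point of reference for the frequentist baselines the abstract mentions (this sketch is not the paper's intrinsic-prior method), the following pure-Python example shows why plain R² is a poor variable-selection criterion: adding a predictor can never decrease R², whereas adjusted R² penalizes model size and typically drops when an irrelevant column is added. The data-generating setup (one true predictor plus one pure-noise column) is invented for illustration.

```python
# Illustration only: compares R^2 and adjusted R^2 for a true model vs. the
# same model with an irrelevant noise predictor appended. OLS is fit via the
# normal equations with a small Gaussian-elimination solver (stdlib only).
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def r2_and_adjusted(X, y):
    # X rows include the intercept column, so p counts intercept + predictors.
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    yhat = [sum(X[i][a] * beta[a] for a in range(p)) for i in range(n)]
    ybar = sum(y) / n
    sse = sum((y[i] - yhat[i]) ** 2 for i in range(n))
    sst = sum((yi - ybar) ** 2 for yi in y)
    r2 = 1.0 - sse / sst
    adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)   # adjusted R^2
    return r2, adj

random.seed(0)
n = 50
x1 = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]     # irrelevant predictor
y = [2.0 * x1[i] + random.gauss(0, 1) for i in range(n)]

X_true = [[1.0, x1[i]] for i in range(n)]          # intercept + true predictor
X_big = [[1.0, x1[i], noise[i]] for i in range(n)] # plus a noise column
r2_a, adj_a = r2_and_adjusted(X_true, y)
r2_b, adj_b = r2_and_adjusted(X_big, y)
print(f"true model : R2={r2_a:.4f}  adj R2={adj_a:.4f}")
print(f"+noise col : R2={r2_b:.4f}  adj R2={adj_b:.4f}")
```

Because least squares can only reduce the residual sum of squares when a column is added, the second R² is guaranteed to be at least as large as the first; adjusted R² trades that off against the extra degree of freedom, which is the behavior the paper's Bayesian criterion is compared against.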

