Abstract

It is quite common in statistical modeling to select a model and make inferences as if the model had been known in advance, i.e. ignoring model selection uncertainty. The resulting estimator is called the post-model-selection estimator (PMSE), and its properties are hard to derive. Conditioning on the data at hand (as is usually the case), Bayesian model selection is free of this phenomenon. This paper is concerned with the properties of the Bayesian estimator obtained after model selection when the frequentist (long-run) performance of the resulting Bayesian estimator is of interest. The proposed method, based on Bayesian decision theory, uses the well-known machinery of Bayesian model averaging (BMA) and outperforms both the PMSE and BMA. It is shown that if the unconditional model selection probability is equal to the model prior, then the proposed approach reduces to BMA. The method is illustrated using Bernoulli trials.

Highlights

  • Statistical modeling usually deals with situations in which some quantity of interest is to be estimated from a sample of observations that can be regarded as realizations of some unknown probability distribution

  • Other variants of model selection include Nguefack-Tsague and Ingo [34], who used Bayesian model averaging (BMA) machinery to derive a focused Bayesian information criterion (FoBMA), which selects different models for different purposes, i.e. their method depends on the parameter singled out for inference

  • A question is whether Bayesian post-model-selection estimators (BPMSEs) are consistent, but this is hard to prove because one does not know the priors associated with BPMSEs

Introduction

Statistical modeling usually deals with situations in which some quantity of interest is to be estimated from a sample of observations that can be regarded as realizations of some unknown probability distribution. Other frequentist references dealing with model selection uncertainty include Burnham and Anderson [16], Nguefack-Tsague ([17]-[20]), Zucchini et al. [21], Nguefack-Tsague and Zucchini [22], and Zucchini [23]. Other variants of model selection include Nguefack-Tsague and Ingo [34], who used BMA machinery to derive a focused Bayesian information criterion (FoBMA), which selects different models for different purposes, i.e. their method depends on the parameter singled out for inference. Nguefack-Tsague and Zucchini [35] proposed a mixture-based Bayesian model averaging method. The motivation of this paper is to raise awareness of the fact that model selection uncertainty is present in Bayesian modeling when interest is focused on the frequentist performance of the Bayesian post-model-selection estimator (BPMSE).
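To make the BMA machinery referred to above concrete, the sketch below computes posterior model probabilities and a model-averaged estimate for Bernoulli trials. The two candidate models, M1 (a uniform Beta(1, 1) prior on the success probability p) and M2 (p fixed at 0.5), are illustrative assumptions and not taken from the paper; this is a minimal sketch of standard BMA, not of the adjusted BMA the paper proposes.

```python
import math

def log_marginal_beta(k, n, a=1.0, b=1.0):
    """Log marginal likelihood of k successes in n Bernoulli trials
    under M1, where p ~ Beta(a, b) (beta-binomial kernel; the binomial
    coefficient is omitted because it cancels across models)."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + k) + math.lgamma(b + n - k)
            - math.lgamma(a + b + n))

def log_marginal_point(k, n, p0=0.5):
    """Log likelihood under M2, where p is fixed at p0."""
    return k * math.log(p0) + (n - k) * math.log(1.0 - p0)

def bma_estimate(k, n, prior1=0.5):
    """Posterior model probabilities and the BMA estimate of p,
    assuming equal prior model probabilities by default."""
    w1 = prior1 * math.exp(log_marginal_beta(k, n))
    w2 = (1.0 - prior1) * math.exp(log_marginal_point(k, n))
    pm1 = w1 / (w1 + w2)          # posterior probability of M1
    pm2 = 1.0 - pm1               # posterior probability of M2
    est1 = (1.0 + k) / (2.0 + n)  # posterior mean of p under M1
    est2 = 0.5                    # estimate under M2
    return pm1 * est1 + pm2 * est2, (pm1, pm2)

# Example: 7 successes in 10 trials
est, (pm1, pm2) = bma_estimate(k=7, n=10)
```

The BMA estimate is a weighted average of the model-specific estimates, with weights given by the posterior model probabilities; a post-model-selection estimator would instead report only the estimate from the single model with the larger weight, ignoring the uncertainty those weights express.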

Bayesian Post-Model-Selection Estimator
Adjusted Bayesian Model Averaging
Prior Model Selection Uncertainty
Posterior Model Selection Uncertainty
Applications
Long Run Evaluation
Evaluation with Integrated Risk