Abstract

In classical Bayesian inference the prior is treated as fixed; it is asymptotically negligible, so any information it contains is absent from the first-order asymptotic results. In practice, however, an informative prior is often summarized from previous studies of the same or a similar kind, and such a prior carries non-negligible information for the current study. Here, departing from the traditional Bayesian point of view, we treat such a prior as non-fixed. In particular, we give the sample sizes of the previous studies underlying the prior the same status as the size of the current dataset, letting both tend to infinity in the asymptotic analysis. The prior is then asymptotically non-negligible, and its original effects are restored under this view. Consequently, Bayesian inference using such a prior is more efficient, as it should be, than inference regarded under the existing setting. We study some basic properties of Bayesian estimators using such priors under convex losses and the 0-1 loss, and illustrate the method with a simulation example.
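The following is a minimal illustrative sketch, not the paper's actual example: a conjugate normal-mean model in which the informative prior is the posterior from m previous observations. If m grows with the current sample size n, the prior's information is not washed out asymptotically, and the posterior variance scales like sigma^2/(m+n) rather than sigma^2/n. All names and parameter values here are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, sigma = 2.0, 1.0          # assumed true mean and known noise sd

def bayes_estimate(m, n):
    """Posterior mean/variance of theta given m past and n current observations."""
    past = rng.normal(theta_true, sigma, size=m)      # data summarized by the prior
    curr = rng.normal(theta_true, sigma, size=n)      # current data
    # Prior N(mean(past), sigma^2/m) combined with the likelihood of the current
    # data gives posterior N(weighted mean, sigma^2/(m+n)) under conjugacy.
    post_mean = (m * past.mean() + n * curr.mean()) / (m + n)
    post_var = sigma**2 / (m + n)
    return post_mean, post_var

for n in (100, 1000, 10000):
    m = n                              # prior sample size grows with n
    est, var = bayes_estimate(m, n)
    print(f"n={n:6d}  posterior mean={est:.4f}  posterior var={var:.2e} "
          f"(vs sigma^2/n = {sigma**2 / n:.2e})")
```

Under this toy setup the posterior variance is roughly half of sigma^2/n when m = n, reflecting the efficiency gain the abstract attributes to treating the prior's sample size as non-fixed.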
