Abstract

The existing models of Bayesian learning with multiple priors by Marinacci (Stat Pap 43:145–151, 2002) and by Epstein and Schneider (Rev Econ Stud 74:1275–1303, 2007) formalize the intuitive notion that ambiguity should vanish through statistical learning in a one-urn environment. Moreover, the multiple priors decision maker of these models will eventually learn the “truth.” To accommodate nonvanishing violations of Savage’s (The foundations of statistics, Wiley, New York, 1954) sure-thing principle, as reported in Nicholls et al. (J Risk Uncertain 50:97–115, 2015), we construct and analyze a model of Bayesian learning with multiple priors for which ambiguity does not necessarily vanish in a one-urn environment. Our decision maker only forms posteriors from priors that survive a prior selection rule which discriminates, with probability one, against priors whose expected Kullback–Leibler divergence from the “truth” is too far from the minimal expected Kullback–Leibler divergence over all priors. The “stubbornness” parameter of our prior selection rule thereby governs how much ambiguity will remain in the limit of our learning model.
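The selection rule described above can be sketched informally. In this hypothetical illustration, each "prior" is represented by its predictive distribution over outcomes, the "truth" is a known distribution, and a stubbornness parameter `epsilon` (a name chosen here for illustration; the paper's rule uses *expected* KL divergence, which this sketch simplifies to plain KL divergence) determines how far a prior's divergence may exceed the minimum before the prior is discarded:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute zero to the sum
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def select_priors(priors, truth, epsilon):
    """Keep only priors whose KL divergence from `truth` is within
    `epsilon` (the stubbornness parameter) of the minimal divergence
    attained over all priors."""
    divs = [kl_divergence(truth, p) for p in priors]
    d_min = min(divs)
    return [p for p, d in zip(priors, divs) if d <= d_min + epsilon]

# Example: a fair coin as the "truth" and three candidate priors.
truth = [0.5, 0.5]
priors = [[0.5, 0.5], [0.6, 0.4], [0.9, 0.1]]

surviving = select_priors(priors, truth, epsilon=0.1)
```

With `epsilon = 0` only divergence-minimizing priors survive (ambiguity vanishes, as in the Marinacci and Epstein–Schneider limits), while a larger `epsilon` lets more distant priors survive, so a set of priors, and hence ambiguity, can persist in the limit.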
