Abstract

Bayesians since Savage ([1972]) have appealed to asymptotic results to counter charges of excessive subjectivity. Their claim is that objectionable differences in prior probability judgements will vanish as agents learn from evidence, and individual agents will converge to the truth. Glymour ([1980]), Earman ([1992]), and others have voiced the complaint that the theorems used to support these claims tell us, not how probabilities updated on evidence will actually behave in the limit, but merely how Bayesian agents believe they will behave, suggesting that the theorems are too weak to underwrite notions of scientific objectivity and intersubjective agreement. I investigate, in a very general framework, the conditions under which updated probabilities actually converge to a settled opinion and the conditions under which the updated probabilities of two agents actually converge to the same settled opinion. I call this mode of convergence ‘deterministic’, and derive results that extend theorems in Huttegger ([2015b]). The results here lead to a simple characterization of deterministic convergence for Bayesian learners and give rise to an interesting argument for what I call ‘strong regularity’, the view that probabilities of non-empty events should be bounded away from zero.

Contents
1 Introduction
2 Preliminaries
3 Deterministic Convergence
4 Consensus in the Limit
5 Strong Regularity
6 Conclusion
Appendix
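As a rough formal gloss (the notation below is mine, introduced only to fix ideas, and is not taken from the paper's own framework): write P for an agent's prior and P(H | E_n(ω)) for the probability assigned to a hypothesis H after conditioning on the first n pieces of evidence along the history ω. The classical convergence theorems establish convergence only almost surely by the lights of the agent's own prior,

\[
P\bigl(\{\omega : \lim_{n\to\infty} P(H \mid E_n(\omega)) \text{ exists}\}\bigr) = 1,
\]

whereas one natural reading of the ‘actual’, deterministic convergence at issue in the abstract asks that the limit exist for every possible evidence history ω, not merely on a set of prior probability one. Strong regularity, as glossed above, would then be the requirement that there is some ε > 0 with

\[
P(A) \geq \varepsilon \quad \text{for every non-empty event } A.
\]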
