Abstract

Wenmackers and Romeijn [38] formalize ideas going back to Shimony [33] and Putnam [28] into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem, and offer a forward-looking open-minded Bayesian that does preserve a version of this guarantee.

Highlights

  • On the standard philosophical conception of Bayesian learning, an agent starts out with a particular prior distribution and learns by conditionalizing on the data it receives

  • If the true hypothesis is among the formulated hypotheses, that is, if it is or becomes part of the open-minded Bayesian’s hypothesis set, is the agent from that point on still guaranteed to almost surely converge on this truth? That is the question we investigate in this paper

  • We focus on the property of weak merger with the true hypothesis, and show that all proposed versions of open-minded Bayesians, unlike the standard Bayesian, fail to guarantee this property
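
For reference, the weak-merger property invoked in the last highlight is standardly understood along the following lines (a gloss in the style of Blackwell and Dubins and of Kalai and Lehrer, not a quotation from the paper). Write $P$ for the agent’s prior over infinite data sequences and $Q$ for the true hypothesis; $P$ merges weakly with $Q$ if, $Q$-almost surely,

    \lim_{n \to \infty} \; \sup_{A \in \sigma(X_{n+1})} \bigl| P(A \mid X_1, \dots, X_n) - Q(A \mid X_1, \dots, X_n) \bigr| = 0,

where the supremum runs over events concerning the next observation $X_{n+1}$ only; strong merger instead takes the supremum over all events about the entire future.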


Summary

Introduction

On the standard philosophical conception of Bayesian learning, an agent starts out with a particular prior distribution and learns by conditionalizing on the data it receives. Central to Shimony’s account is an idea he traces back to Putnam (1963; see Shimony, 1970, 89; 1969, 2), and in more veiled form to Jeffreys (1961; see Shimony, 1970, 97ff; Howson, 1988). This is the idea that, rather than taking as starting point a hypothesis set that is as wide as possible, Bayesian inference is relative to a limited set of “seriously proposed hypotheses” that is dynamically expanded as new such hypotheses are proposed.

We should emphasize that Wenmackers and Romeijn in their paper (and we in this paper) are concerned with the question of how to incorporate externally proposed new hypotheses: their proposals are attempts to make this aspect part of a Bayesian logic of inductive inference. We are here only concerned with this incorporation step, but presume, with Wenmackers and Romeijn, that the scope of mere calculation may be slightly extended to the procedure of incorporating given new hypotheses into your model.
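
To make this concrete, the following is a minimal sketch, in Python, of conditionalization over a finite hypothesis set, together with one naive way of admitting a newly proposed hypothesis by giving it some prior mass and rescaling the rest. This is our own illustration under simplifying assumptions (i.i.d. coin tosses, an ad hoc choice of prior mass for the newcomer), not Wenmackers and Romeijn’s formalism or the forward-looking proposal of this paper.

    # Conditionalization: P(H | E) = P(E | H) P(H) / sum_H' P(E | H') P(H').
    def conditionalize(prior, likelihoods, datum):
        joint = {h: prior[h] * likelihoods[h](datum) for h in prior}
        total = sum(joint.values())
        return {h: p / total for h, p in joint.items()}

    # Naive admission rule (illustrative only): the newcomer gets prior mass
    # new_mass, and the old hypotheses are rescaled by (1 - new_mass).
    def admit_hypothesis(prior, new_h, new_mass):
        rescaled = {h: p * (1 - new_mass) for h, p in prior.items()}
        rescaled[new_h] = new_mass
        return rescaled

    # Toy model: data are 0/1 coin tosses; each hypothesis fixes the bias towards 1.
    likelihoods = {
        "fair":   lambda x: 0.5,
        "biased": lambda x: 0.8 if x == 1 else 0.2,
    }
    prior = {"fair": 0.5, "biased": 0.5}

    for toss in [1, 1, 0, 1]:
        prior = conditionalize(prior, likelihoods, toss)

    # A further hypothesis is "seriously proposed" only now and is admitted
    # with 10% prior mass; the earlier tosses have not been applied to it.
    likelihoods["very_biased"] = lambda x: 0.95 if x == 1 else 0.05
    prior = admit_hypothesis(prior, "very_biased", 0.10)

    for toss in [1, 1, 1, 1]:
        prior = conditionalize(prior, likelihoods, toss)

    print(prior)

The question pursued in the paper is, in effect, whether such mid-stream reassignments of prior mass can be made in a way that still guarantees weak merger with the true hypothesis once it has entered the set; the forward-looking proposal discussed below is designed to preserve a version of that guarantee.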

The open-minded Bayesians
The failure of truth-convergence
Forward-looking open-minded truth-convergence
Conclusion

