Abstract

This chapter examines the difficulties self-locating degrees of belief cause for traditional Bayesian modeling frameworks. It argues that these problems are caused not by Bayesianism’s view of the contents to which degrees of belief are assigned, but by the way Conditionalization (the traditional Bayesian updating norm) interacts with context-sensitive claims. Solving these problems requires us first to understand the interactions between formal models of the same story based on different modeling languages. A Proper Expansion Principle (PEP) is proposed that governs such interactions. Combining PEP with Generalized Conditionalization (an updating rule defended in previous chapters) yields the Certainty-Loss Framework (CLF), a formal framework that correctly models rational requirements in stories involving context-sensitivity. The version of PEP presented in this chapter improves on earlier versions proposed by the author, which were susceptible to a counterexample devised by Sarah Moss.
