Abstract

What norm governs how an agent should change their beliefs when they encounter a completely new possibility? Orthodox Bayesianism has no answer, as it takes all learning to involve updating prior beliefs. A partial proposal is Reverse Bayesianism, which mandates the preservation of ratios of prior probabilities, but it faces counterexamples introduced by Mahtani (2021). I propose to separate awareness growth into two stages: awareness revision and belief extension. I argue that Mahtani’s cases highlight that we need to theorize awareness revision before we can define a proposal for belief extension, such as Reverse Bayesianism. I provide a formal model of awareness revision which makes explicit how propositions are distinguished within awareness states and identified across them. Reformulating Reverse Bayesianism to take input from my model allows it to navigate Mahtani-style cases. My model leaves open how agents choose to identify propositions across awareness states, and I propose that they ought to do so conservatively: preserving undisturbed prior reasoning about the structure of their awareness. I then spell out this proposal in a special case. This is a partial proposal, and I close with a discussion of how to elaborate on it and how to advance research into awareness revision.
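The ratio-preservation constraint mentioned above can be stated formally. On the standard formulation (the notation below is illustrative, not drawn from this paper): where $P$ is the agent's prior and $P^{+}$ her credence function after awareness growth, Reverse Bayesianism requires that for any propositions $A$ and $B$ the agent was already aware of,

$$\frac{P^{+}(A)}{P^{+}(B)} = \frac{P(A)}{P(B)}, \quad \text{provided } P(B) > 0.$$

That is, becoming aware of a new possibility may shrink the total probability assigned to old propositions, but it must not disturb their relative standing.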
