Abstract

According to approximate Bayesianism, Bayesian norms are ideal norms worthy of approximation for non-ideal agents. This paper discusses one potential challenge for approximate Bayesianism: in non-transparent learning situations—situations where the agent does not learn what they have or have not learnt—it is unclear that the Bayesian norms are worth satisfying, let alone approximating. I discuss two replies to this challenge and find neither satisfactory. I suggest that what transpires is a general tension between approximate Bayesianism and the possibility of “non-ideal” epistemic situations.