Abstract

We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent’s beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds (or more generally, truth in all the worlds that are plausible enough). We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: (1) learning observable evidence obtained by repeated sampling from the unknown distribution; and (2) learning higher-order information about the distribution. The first changes only the plausibility map (via a ‘plausibilistic’ version of Bayes’ Rule), but leaves the given set of possible distributions essentially unchanged; the second rules out some distributions, thus shrinking the set of possibilities, without changing their plausibility ordering. We look at the stability of beliefs under either of these types of learning, defining two related notions (safe belief and statistical knowledge), as well as a measure of the verisimilitude of a given plausibility model. We prove a number of convergence results, showing how our agent’s beliefs track the true probability after repeated sampling, and how she eventually gains, in a sense, (statistical) knowledge of that true probability. Finally, we sketch the contours of a dynamic doxastic logic for statistical learning.

Highlights

  • We prove a number of convergence results, showing how our agent’s beliefs track the true probability after repeated sampling, and how she eventually gains, in a sense, (statistical) knowledge of that true probability

  • The goal of this paper is to propose a new model for learning a probability distribution, in situations that are commonly characterized as those of “radical uncertainty” (Walley 1996) or “Knightian uncertainty” (Cerreia-Vioglio et al 2013)

  • We studied forming beliefs about unknown probabilities in situations that are commonly described as those of radical uncertainty

Summary

Introduction

The goal of this paper is to propose a new model for learning a probability distribution, in situations that are commonly characterized as those of “radical uncertainty” (Walley 1996) or “Knightian uncertainty” (Cerreia-Vioglio et al 2013). Even a good measurement by weighing will leave open a whole interval of possible biases. In this sense, a combination of observations and higher-order information will not in general allow the agent to come to know the correct distribution, in the standard (‘infallible’) sense in which the term knowledge is used in doxastic and epistemic logics. The second type of evidence (higher-order information about the distribution) induces a more familiar kind of update: the distributions that do not satisfy the new information (typically given in the form of linear inequalities) are eliminated, and beliefs are formed as before by focusing on the most plausible remaining distributions. This form of revision is known as AGM conditioning in Belief Revision Theory (Alchourrón et al 1985), and as update or “public announcement” in Logic (Baltag and Renne 2016; van Ditmarsch et al 2007), and satisfies all the standard AGM axioms.
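To make the two update operations concrete, here is a minimal illustrative sketch (not the paper’s formalism): a finite set of candidate coin biases carries a numeric plausibility map, sampling evidence reweights the map by likelihood while leaving the candidate set intact, and higher-order information eliminates candidates without reordering the survivors. All function names and the numeric values are hypothetical.

```python
def bayes_update(plausibility, data):
    """Plausibilistic analogue of Bayes' Rule: reweight each candidate
    bias p by the likelihood of the observed heads/tails sequence.
    The set of candidates itself is left unchanged."""
    heads = sum(data)
    tails = len(data) - heads
    return {p: w * (p ** heads) * ((1 - p) ** tails)
            for p, w in plausibility.items()}

def agm_update(plausibility, constraint):
    """AGM-style conditioning on higher-order information: eliminate
    the candidates violating the constraint, keeping the plausibility
    ordering of the survivors as it was."""
    return {p: w for p, w in plausibility.items() if constraint(p)}

def belief(plausibility):
    """Belief targets the most plausible remaining candidates."""
    top = max(plausibility.values())
    return {p for p, w in plausibility.items() if w == top}

# Uniform initial plausibility over three candidate biases.
pl = {0.2: 1.0, 0.5: 1.0, 0.8: 1.0}
pl = bayes_update(pl, [1, 1, 1, 0])      # sample: three heads, one tail
pl = agm_update(pl, lambda p: p >= 0.5)  # learn the coin is not tail-biased
print(belief(pl))                        # → {0.8}
```

Note how the two operations differ in kind: `bayes_update` only shifts relative plausibility, whereas `agm_update` shrinks the space of possibilities, mirroring the distinction drawn above.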

Preliminaries and notation
Probabilistic plausibility models
Cautious plausibility
Evidence-based plausibility
Centered plausibility
Plausibility based on second-order probability
Belief is consistent
Conditioning and belief dynamics
Tracking the truth
Towards a logic of statistical learning
Conclusion and comparison with other work