Abstract
When humans infer underlying probabilities from stochastic observations, they exhibit biases and variability that cannot be explained on the basis of sound, Bayesian manipulations of probability. This is especially salient when beliefs are updated as a function of sequential observations. We introduce a theoretical framework in which biases and variability emerge from a trade-off between Bayesian inference and the cognitive cost of carrying out probabilistic computations. We consider two forms of the cost: a precision cost and an unpredictability cost; these penalize beliefs that are less entropic and less deterministic, respectively. We apply our framework to the case of a Bernoulli variable: the bias of a coin is inferred from a sequence of coin flips. Theoretical predictions are qualitatively different depending on the form of the cost. A precision cost induces overestimation of small probabilities, on average, as well as a limited memory of past observations and, consequently, a fluctuating estimate of the bias. An unpredictability cost induces underestimation of small probabilities and a fixed bias that remains appreciable even for nearly unbiased observations. The case of a fair (equiprobable) coin, however, is singular, with non-trivial and slow fluctuations in the inferred bias. The proposed framework of costly Bayesian inference illustrates the richness of a ‘resource-rational’ (or ‘bounded-rational’) picture of seemingly irrational human cognition.
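To make the framework concrete, here is a minimal numerical sketch of cost-regularized belief updating for the Bernoulli case. The grid discretization, the cost weight `lam`, and the closed forms below are illustrative assumptions, not the authors' exact formulation; they follow from a variational step that minimizes the KL divergence to the exact Bayesian posterior plus the weighted cost.

```python
import numpy as np

# Discretized values of the coin bias p; the grid is an illustrative choice.
GRID = np.linspace(0.001, 0.999, 999)

def bayes_update(belief, flip):
    """Exact Bayesian update of a belief over p after one flip (1 = heads)."""
    likelihood = GRID if flip == 1 else 1.0 - GRID
    posterior = belief * likelihood
    return posterior / posterior.sum()

def precision_cost_update(belief, flip, lam=0.5):
    """Precision cost: low-entropy (precise) beliefs are penalized.

    Minimizing KL(P || Q) - lam * H(P), with Q the Bayesian posterior,
    yields a 'tempered' posterior P proportional to Q**(1 / (1 + lam)),
    which stays flatter than Q.
    """
    q = bayes_update(belief, flip)
    p = q ** (1.0 / (1.0 + lam))
    return p / p.sum()

def unpredictability_cost_update(belief, flip, lam=0.5):
    """Unpredictability cost: mass on unpredictable (high-entropy) coins
    is penalized.

    Minimizing KL(P || Q) + lam * E_P[h(p)], with h(p) the Bernoulli
    entropy, tilts the posterior by exp(-lam * h(p)), pushing the
    estimate away from p = 1/2 even for a nearly unbiased coin.
    """
    q = bayes_update(belief, flip)
    h = -(GRID * np.log(GRID) + (1.0 - GRID) * np.log(1.0 - GRID))
    p = q * np.exp(-lam * h)
    return p / p.sum()
```

Because the tempering in the precision-cost update is applied after every observation rather than once at the end, the influence of an observation decays geometrically with its age, which is one way to picture the limited memory described in the abstract.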
Highlights
While the faculty of rational thinking defines, at least to an extent, our human nature, it suffers from a remarkably long list of so-called ‘cognitive biases’: systematic deviations from rational information processing and behavior [1].
We emphasize that our aim here is not to claim that humans carry out the altered Bayesian inference that precisely matches our mathematical formulation, presented below, nor that our prescription fits behavioral data better than other models; rather, we present the idea of regularized Bayesian inference, in which the trade-off emerges from a cognitive constraint, as one possible ingredient among the many rationalizations of cognitive biases.
We examine the behavior of a model subject whose inference is regularized by the unpredictability cost or by the precision cost when the coin is fair, i.e., when p = 1/2 (a short simulation of this case is sketched below).
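For concreteness, the following self-contained sketch simulates a model subject observing a fair coin under the precision cost, reusing the tempered update from the sketch above with an illustrative cost weight. The per-step tempering keeps the belief broad, so the subject's estimate of p keeps fluctuating around 1/2 where the exact Bayesian estimate would converge.

```python
import numpy as np

rng = np.random.default_rng(0)           # illustrative seed
GRID = np.linspace(0.001, 0.999, 999)    # discretized values of the bias p

def precision_cost_update(belief, flip, lam=0.5):
    """Tempered Bayesian update (precision cost); `lam` is illustrative."""
    likelihood = GRID if flip == 1 else 1.0 - GRID
    q = belief * likelihood
    q /= q.sum()
    p = q ** (1.0 / (1.0 + lam))
    return p / p.sum()

flips = rng.integers(0, 2, size=2000)    # a fair coin: p = 1/2
belief = np.ones_like(GRID) / GRID.size  # uniform prior over p
estimates = []
for flip in flips:
    belief = precision_cost_update(belief, flip)
    estimates.append(GRID @ belief)      # running posterior-mean estimate of p

# Under exact Bayesian inference the estimate converges to 1/2; here the
# belief never concentrates, so the estimate tracks the recent flips and
# fluctuates indefinitely around 1/2.
print(np.std(estimates[-1000:]))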
Summary
While the faculty of rational thinking defines, at least to an extent, our human nature, it suffers from a remarkably long list of so-called ‘cognitive biases’: systematic deviations from rational information processing and behavior [1]. A notable category of biases comprises those that govern the way we manipulate probabilistic quantities: these biases affect our inference of the probability of events, our decision-making process, and, more generally, our behavior in situations where stimuli obey (seeming or unknown) stochastic rules [4,5,6,7,8,9,10,11,12]. In such situations, human subjects violate a normative prescription viewed by the experimenter as the rational one (e.g., “maximize the number of correct responses per unit time in a given task”) [13]. Our study proposes a fresh theoretical understanding of the biases that humans exhibit when carrying out inferences about probabilities.