Abstract

There is a growing body of evidence that the human brain may be organized according to principles of predictive processing. An important conjecture in neuroscience is that a brain organized in this way can effectively and efficiently approximate Bayesian inferences. Given that many forms of cognition seem to be well characterized as a form of Bayesian inference, this conjecture has great import for cognitive science. It suggests that predictive processing may provide a neurally plausible account of how forms of cognition that are modeled as Bayesian inference may be physically implemented in the brain. Yet, as we show in this paper, the jury is still out on whether or not the conjecture is really true. Specifically, we demonstrate that each key subcomputation invoked in predictive processing potentially hides a computationally intractable problem. We discuss the implications of these sobering results for the predictive processing account and propose a way to move forward.

Highlights

  • The predictive processing account is becoming increasingly popular as an account of perceptual, behavioral, and neural phenomena in cognitive neuroscience

  • In line with the literature (Gigerenzer 2008; Bossaerts and Murawski 2017; Frixione 2001; Parberry 1994; Thagard and Verbeurgt 1998; Tsotsos 1990; van Rooij et al. 2019), we will adopt NP-hardness as a formalization of the notion of “intractability.” A computation that is NP-hard cannot be computed in so-called polynomial time, i.e., in a number of steps on the order of n^c, where n is a measure of the input size (e.g., n may be the number of nodes in the Bayesian network) and c is a constant

  • Most Bayesian computations are NP-hard, both to compute exactly and to approximate (Kwisthout and van Rooij 2013a; Kwisthout 2015); we show that “low prediction error” is not sufficient to render the computations in predictive processing tractable (see the sketch after these highlights)
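To give a feel for the scaling at issue in the two highlights above, here is a minimal sketch (our illustration, not code from the paper; the toy network, variable names, and probabilities are invented for the example): naive exact inference in a Bayesian network enumerates all joint value assignments, which takes on the order of 2^n steps for n binary variables rather than polynomial n^c time. The NP-hardness results cited above imply that, in the worst case, no algorithm avoids this kind of blow-up (unless P = NP).

```python
# Minimal sketch (illustration only): brute-force exact inference in a
# toy causal Bayesian network over binary variables. Enumerating all
# joint assignments costs 2^n steps for n variables, i.e., exponential
# rather than polynomial (n^c) time.
from itertools import product

def joint_prob(assign):
    """Pr(Rain, Sprinkler, WetGrass) for one joint value assignment.
    Toy network: Rain -> WetGrass <- Sprinkler; numbers are made up."""
    p_rain = 0.2 if assign["Rain"] else 0.8
    p_sprk = 0.3 if assign["Sprinkler"] else 0.7
    # Pr(WetGrass = true | Rain, Sprinkler)
    p_wet_true = {(True, True): 0.99, (True, False): 0.9,
                  (False, True): 0.85, (False, False): 0.05}[
                      (assign["Rain"], assign["Sprinkler"])]
    p_wet = p_wet_true if assign["WetGrass"] else 1.0 - p_wet_true
    return p_rain * p_sprk * p_wet

def posterior(query_var, query_val, evidence, variables):
    """Pr(query_var = query_val | evidence) by exhaustive enumeration."""
    num = den = 0.0
    # 2^n joint assignments for n binary variables: the source of blow-up.
    for values in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, values))
        if any(assign[v] != val for v, val in evidence.items()):
            continue  # inconsistent with the observation
        p = joint_prob(assign)
        den += p
        if assign[query_var] == query_val:
            num += p
    return num / den

variables = ["Rain", "Sprinkler", "WetGrass"]
print(posterior("Rain", True, {"WetGrass": True}, variables))  # ~0.44
```

With three binary variables the loop enumerates 8 assignments; with 100 variables it would need 2^100, which is why worst-case exact inference is out of reach for brains and machines alike.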



Introduction

The predictive processing account is becoming increasingly popular as an account of perceptual, behavioral, and neural phenomena in cognitive neuroscience. The computational problems at the heart of the account are defined over a causal Bayesian network B with designated variable subsets Hyp (the hypotheses) and Pred (the predictions), and include the following:

BELIEF-UPDATING (SUM)
Instance: A causal Bayesian network B with designated variable subsets Hyp and Pred, a probability distribution Pr(Pred) over Pred, and a prediction error δ(Obs, Pred).

As in the SUM variants, we make a distinction in HYPOTHESIS-UPDATING between BELIEF-UPDATING, where we compute the most probable hypothesis given the observation (taking the prior probabilities of the hypotheses into consideration), and BELIEF-REVISION, where we compute the most likely hypothesis that would minimize the prediction error.

BELIEF-UPDATING (MAX)
Instance: A causal Bayesian network B with designated variable subsets Hyp and Pred, a joint value assignment p to Pred, and a prediction error d(p′, p) such that the observed assignment p′ = p + d(p′, p).

The remaining computational problems are similar to their SUM variants:

MODEL-REVISION (MAX)
Instance: As in BELIEF-UPDATING; in addition, a set P ⊆ Pr_B of parameter probabilities.
Output: An intervention a on the variables in A such that the resulting prediction error d_H[a] is minimal.
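To make the distinction between the SUM and MAX problem variants concrete, here is a minimal sketch in Python (our illustration, not the paper's formalism): in the SUM variants the prediction is a probability distribution Pr(Pred), and the prediction error δ(Obs, Pred) compares the observed distribution to the predicted one; in the MAX variants the prediction is a single joint value assignment p, and d(p′, p) is the difference between the observed assignment p′ and the predicted assignment p. The use of KL divergence for the SUM case, and all variable names below, are our assumptions; the definitions above leave the error measure abstract.

```python
# Minimal sketch (illustration only): two ways of measuring the
# prediction error invoked in the problem definitions above.
import math

def kl_divergence(obs, pred):
    """δ(Obs, Pred) for SUM variants: KL divergence between the observed
    and predicted distributions (one common choice of error measure;
    the problem definitions leave the measure abstract)."""
    return sum(o * math.log(o / p) for o, p in zip(obs, pred) if o > 0)

def assignment_error(observed, predicted):
    """d(p', p) for MAX variants: the prediction variables on which the
    observed joint value assignment p' deviates from the predicted
    assignment p (so that, informally, p' = p plus this difference)."""
    return {var: observed[var] for var in predicted
            if observed[var] != predicted[var]}

# SUM variant: predicted vs. observed distribution over one Pred variable.
pred = [0.7, 0.3]   # Pr(Pred): the model predicts value 0 with prob. 0.7
obs = [0.2, 0.8]    # the observed distribution
print(kl_divergence(obs, pred))    # > 0: nonzero prediction error

# MAX variant: predicted vs. observed joint value assignment.
p = {"Pred1": True, "Pred2": False}
p_obs = {"Pred1": True, "Pred2": True}
print(assignment_error(p_obs, p))  # {'Pred2': True}
```

Which error measure is appropriate is left open by the definitions above; the sketch fixes one in each case only for illustration.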

Intractability Results
Tractability Results
A Note on Approximation
Discussion