Abstract

The idea that the brain learns generative models of the world has been widely promulgated. Most approaches have assumed that the brain learns an explicit density model that assigns a probability to each possible state of the world. However, explicit density models are difficult to learn, requiring approximate inference techniques that may find poor solutions. An alternative approach is to learn an implicit density model that can sample from the generative model without evaluating the probabilities of those samples. The implicit model can be trained to fool a discriminator into believing that the samples are real. This is the idea behind generative adversarial algorithms, which have proven adept at learning realistic generative models. This paper develops an adversarial framework for probabilistic computation in the brain. It first considers how generative adversarial algorithms overcome some of the problems that vex prior theories based on explicit density models. It then discusses the psychological and neural evidence for this framework, as well as how the breakdown of the generator and discriminator could lead to delusions observed in some mental disorders.

Highlights

  • Our sensory inputs are impoverished, and yet our experience of the world feels richly detailed

  • How can we explain the peculiar ways that the inferential apparatus breaks down? In particular, how can we understand the origins of delusions, hallucinations, and confabulations that arise in certain mental disorders? While Bayesian models have been developed to explain these phenomena, they fall short in certain ways that we discuss later on. We argue that these issues can be addressed by thinking about Bayesian inference from a different algorithmic perspective

  • Breakdown of the generator and discriminator may explain the origin of false beliefs and percepts in certain mental disorders: a dysfunctional generator can produce abnormal content, and a dysfunctional discriminator can endorse that content as real. This “generative-adversarial” interplay is motivated by recent advances in machine learning, which have produced algorithms for learning generative models based on the same idea

Summary

INTRODUCTION

Our sensory inputs are impoverished, and yet our experience of the world feels richly detailed. Breakdown of the generator and discriminator may explain the origin of false beliefs and percepts in certain mental disorders: a dysfunctional generator can produce abnormal content, and a dysfunctional discriminator can endorse that content as real. This “generative-adversarial” interplay is motivated by recent advances in machine learning, which have produced algorithms for learning generative models based on the same idea. What follows is a rampantly speculative discussion of the implications for psychology and neuroscience (note that the article does not propose any novel computational ideas from the perspective of machine learning). We then apply these ideas to understanding the delusions observed in some mental disorders.
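The adversarial training idea referred to here can be sketched in a few lines. The following is a minimal, illustrative 1-D example (not taken from the paper, and no claim about neural implementation): the generator has a single learnable mean parameter, the discriminator is a logistic regression, and both are updated by hand-derived gradient ascent on the standard GAN objectives.

```python
import numpy as np

# Minimal 1-D GAN sketch (illustrative assumptions throughout).
# Generator: x = mu + 0.5 * z, z ~ N(0, 1), with one learnable parameter mu.
# Discriminator: D(x) = sigmoid(w * x + b), a logistic regression.
rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

mu = 0.0           # generator parameter; the true data mean is 3.0
w, b = 0.0, 0.0    # discriminator parameters
lr, batch = 0.1, 64

for step in range(2000):
    real = 3.0 + 0.5 * rng.standard_normal(batch)   # samples from the world
    fake = mu + 0.5 * rng.standard_normal(batch)    # samples from the generator

    # Discriminator ascends log D(real) + log(1 - D(fake)):
    # learn to call real samples real and generated samples fake.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascends log D(fake) (the non-saturating loss):
    # shift mu so its samples fool the discriminator.
    fake = mu + 0.5 * rng.standard_normal(batch)
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)

print(f"learned mu = {mu:.2f} (true mean 3.0)")
```

At equilibrium the generator's samples are indistinguishable from the data and the discriminator's output collapses toward 0.5; note that only samples are ever drawn, and no probability density is evaluated, which is what makes the model "implicit".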

GENERATIVE MODELS
The Puzzle of Phenomenology
Discriminating Between Reality and Imagination
NEURAL IMPLICATIONS
DELUSIONS
DISCUSSION
Learning From the Imagination
Toward a Synthesis of Approximate Inference Algorithms
Predictions
