Abstract

We humans carry around in our heads rich internal mental models that constitute our construction of the world and of its relation to us. These models can be expressed at multiple levels of abstraction, from beliefs about sensory stimuli and the output of our motor programs to higher-level beliefs about the self. Advancing our understanding of the brain’s internal processing states, however challenging, could lead to breakthroughs in understanding phenomena such as dreaming, consciousness, and mental disorders (1). Computational theories propose that internal models are broadcast throughout the brain, including to its sensory areas (2). Techniques to read out internal mental models will deliver insight into how our brains use beliefs or predictions to “construct” the environment. Internal models in higher brain areas are complex, however, which creates a paradox: their very richness makes them difficult to constrain if we want to decipher how the brain encodes them. But when internal models are fed back to sensory cortex, they are presumably translated into sensory predictions whose content is intuitively simpler. In PNAS, Chong et al. (3) provide the empirical evidence needed to drive this theory forward. Using brain reading, they show that the brain constructs new, plausible predictions of expected sensory input, and that these predictions can be read out in sensory cortex.

