Abstract
This article describes mechanistic links that exist in advanced brains between processes that regulate conscious attention, seeing, and knowing, and those that regulate looking and reaching. These mechanistic links arise from basic brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles require conscious states to mark perceptual and cognitive representations that are complete, context-sensitive, and stable enough to control effective actions. Surface–shroud resonances support conscious seeing and action, whereas feature–category resonances support learning, recognition, and prediction of invariant object categories. Feedback interactions between cortical areas such as peristriate visual cortical areas V2, V3A, and V4, and the lateral intraparietal area (LIP) and intraparietal sulcus (IPS) of the posterior parietal cortex (PPC) control sequences of saccadic eye movements that foveate salient features of attended objects and thereby drive invariant object category learning. Learned categories can, in turn, prime the objects and features that are attended and searched. These interactions coordinate processes of spatial and object attention, figure–ground separation, predictive remapping, invariant object category learning, and visual search. They create a foundation for learning to control motor-equivalent arm movement sequences, and for storing these sequences in cognitive working memories that can trigger the learning of cognitive plans with which to read out skilled movement sequences. Cognitive–emotional interactions that are regulated by reinforcement learning can then help to select the plans that control actions most likely to acquire valued goal objects in different situations. Many interdisciplinary psychological and neurobiological data about conscious and unconscious behaviors in normal individuals and clinical patients have been explained in terms of these concepts and mechanisms.
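To make the idea of a feature–category resonance concrete, the following is a minimal, illustrative sketch of the match/reset cycle in a simplified ART-1-style network. It is a toy under stated assumptions, not the article's surface–shroud or feature–category models, which are richer continuous-time neural networks; the names `art1_learn`, `vigilance`, and `max_categories` are invented here for illustration.

```python
import numpy as np

def art1_learn(inputs, vigilance=0.75, max_categories=20):
    """Cluster binary feature vectors with a simplified ART-1 match/reset cycle."""
    weights = []   # one learned category template per recruited category
    labels = []    # category index assigned to each input
    for x in inputs:
        x = np.asarray(x, dtype=float)
        # Rank existing categories by a simplified bottom-up choice strength.
        order = sorted(range(len(weights)),
                       key=lambda j: -np.sum(np.minimum(x, weights[j])))
        for j in order:
            # Top-down match: fraction of input features matched by the template.
            match = np.sum(np.minimum(x, weights[j])) / max(np.sum(x), 1e-9)
            if match >= vigilance:
                # Resonance: the match is good enough, so learning refines the template.
                weights[j] = np.minimum(x, weights[j])
                labels.append(j)
                break
            # Otherwise: reset -- this category is inhibited and the next is tested.
        else:
            if len(weights) < max_categories:
                # No category resonated: recruit a new one for this input.
                weights.append(x.copy())
                labels.append(len(weights) - 1)
            else:
                labels.append(-1)  # capacity exhausted; input left unclassified
    return weights, labels

# Example: the first two patterns share a category at this vigilance level.
templates, labels = art1_learn([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])
```

Raising `vigilance` toward 1 makes the match criterion stricter, so the network recruits more and narrower categories; this is one simple sense in which resonance marks representations that are stable enough to drive learning rather than overwrite it.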
Highlights
How conscious resonant dynamics link perception and cognition to action
This article summarizes a radical departure from the classical view that sensory inputs are transformed via feedforward processes from perception to cognition to action, with little regard for processes of visual attention, memory, learning, decision-making, and interpersonal interaction.
The CLEARS processes (Consciousness, Learning, Expectation, Attention, Resonance, and Synchrony) are realized by building upon basic brain designs such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance, which are described below.
Resonant states that are not accessible to consciousness, but that dynamically stabilize learned memories, include parietal-prefrontal resonances that trigger the selective opening of basal ganglia gates to enable the readout of contextually appropriate thoughts and actions (Brown, Bullock, & Grossberg, 2004; Buschman & Miller, 2007; Grossberg, 2016b), and entorhinal-hippocampal resonances that dynamically stabilize the learning of entorhinal grid cells and hippocampal place cells during spatial navigation (Grossberg & Pilly, 2014; Kentros, Agnihotri, Streater, Hawkins, & Kandel, 2004; Morris & Frey, 1997; Pilly & Grossberg, 2012).
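As a reading aid, here is a deliberately tiny sketch of the gating idea in the first highlight: a resonance signal opens a gate that releases a stored plan. This is a schematic assumption, not the Brown, Bullock, & Grossberg (2004) basal ganglia model; `gated_readout`, `resonance_strength`, and `gate_threshold` are hypothetical names.

```python
def gated_readout(resonance_strength, plans, gate_threshold=0.5):
    """Release the most active stored plan only while the gate is open.

    Hypothetical toy: `resonance_strength` stands in for a parietal-prefrontal
    resonance, and the threshold test stands in for the selective opening of a
    basal ganglia gate.
    """
    if resonance_strength <= gate_threshold:
        return None                       # gate closed: no plan is read out
    return max(plans, key=plans.get)      # gate open: the strongest plan wins

# A strong resonance releases the contextually dominant plan; a weak one does not.
print(gated_readout(0.8, {"reach": 0.9, "look": 0.4}))  # -> reach
print(gated_readout(0.3, {"reach": 0.9, "look": 0.4}))  # -> None
```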
Summary
This article summarizes a radical departure from the classical view that sensory inputs are transformed via feedforward processes from perception to cognition to action. The non-conscious resonances described above, such as parietal-prefrontal and entorhinal-hippocampal resonances, do not include feature detectors that are activated by external senses, such as those that support vision or audition, or by internal senses, such as those that support emotion.