How does the human brain link relational concepts to perceptual experience? For example, a speaker may say "the cup to the left of the computer" to direct the listener's attention to one of two cups on a desk. We provide a neural dynamic account both for perceptual grounding, in which relational concepts enable the attentional selection of objects in the visual array, and for the generation of descriptions of the visual array using relational concepts. In the model, activation in neural populations evolves dynamically under the influence of both inputs and strong interaction, as formalized in dynamic field theory. Relational concepts are modeled as patterns of connectivity to perceptual representations. These patterns generalize across the visual array through active coordinate transforms that center the representation of target objects on potential reference objects. How the model perceptually grounds or generates relational descriptions is probed in 104 simulations that systematically vary the spatial and movement relations employed, the number of feature dimensions used, and the number of matching and nonmatching objects. We explain how sequences of decisions emerge from the time- and state-continuous neural dynamics, and how relational hypotheses are generated and either accepted or rejected, with rejection followed by the selection of new objects or the generation of new relational hypotheses. Its neural realism distinguishes the model from information-processing accounts; its capacity to autonomously generate sequences of processing steps distinguishes it from deep neural network accounts. The model points toward a neural dynamic theory of higher cognition.
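For reference, the activation dynamics invoked here are commonly formalized in dynamic field theory by the Amari field equation; what follows is a minimal sketch of that standard general form, and the paper's specific architecture may add further terms and inter-field couplings:

\[
\tau \, \dot{u}(x, t) = -u(x, t) + h + s(x, t) + \int w(x - x') \, \sigma\big(u(x', t)\big) \, dx'
\]

where \(u(x, t)\) is the activation field over a perceptual dimension \(x\), \(h < 0\) is the resting level, \(s(x, t)\) is external input, \(w\) is an interaction kernel combining local excitation with surround inhibition, and \(\sigma\) is a sigmoidal threshold function that restricts interaction to sufficiently activated field sites.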