Higher cognitive functioning is widely assumed to be largely symbolic/representational and compositional in nature. At various processing stages, from perceptual to motor, discrete structural elements with intricate temporal dependencies are combined into increasingly complex constructs [1]. Mapping such complex computational processes onto the underlying neuronal infrastructure, and assessing the properties of the neuronal system responsible for their implementation, is not straightforward, but it is likely to yield important insights into the nature of neural computation. To address these issues, we adopt ideas and formalisms developed in theoretical linguistics to study the nature of rule-like, compositional behavior in the language domain, namely the acquisition of formal (artificial) grammars. The Artificial Grammar Learning (AGL) paradigm has a long tradition in psycholinguistic research (see, e.g., [2] for an overview) as a means to study the nature of syntactic processing and implicit sequence learning. Through mere exposure and without performance feedback, human beings implicitly acquire knowledge about the structural regularities implemented by complex rule systems. In this work, we investigate to what extent generic cortical microcircuits can support formally explicit symbolic computations, instantiated by the same grammars used in the human AGL literature. These grammars implement various types of local and non-adjacent dependencies between sequence elements and thus differ in the computational complexity and online processing memory required to learn them adequately. We use concrete implementations of input-driven recurrent networks composed of noisy, spiking neurons, built according to the reservoir computing framework and dynamically shaped by a variety of synaptic and intrinsic plasticity mechanisms operating concomitantly [3]. Additionally, we compare supervised and unsupervised learning rules for the decoding algorithms, with varying degrees of biological plausibility. We show that, when shaped by plasticity, these models are capable of acquiring the structure of simple (regular) grammars. When asked to judge string legality, in a manner similar to human subjects, the networks perform at a level qualitatively comparable to human performance. We identify which plasticity mechanisms are crucial for the task, with the aim of specifying a minimal model. Furthermore, the networks' ability to process (bounded) recursive constructions containing multiple patterns of non-adjacent dependencies closely mirrors recent findings on human performance, highlighting inherent limitations imposed by the nature of neuronal processing.
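To make the pipeline concrete, the sketch below is a heavily simplified, rate-based analogue of the setup described above: a small Reber-like finite-state grammar, an echo-state-style reservoir of tanh units standing in for the spiking, plasticity-shaped microcircuits, and a ridge-regression readout standing in for the trained decoders. The grammar, the symbol set, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regular grammar (Reber-like finite-state machine; illustrative only).
# Each state lists (symbol, next_state) transitions; state 3 is terminal.
SYMBOLS = ["M", "V", "X", "R", "T"]
TRANSITIONS = {
    0: [("M", 1), ("V", 2)],
    1: [("X", 1), ("R", 3)],
    2: [("T", 2), ("X", 3)],
    3: [],
}

def sample_walk(max_len=12):
    """Random walk through the grammar; typically ends in the terminal state."""
    state, out = 0, []
    while TRANSITIONS[state] and len(out) < max_len:
        sym, state = TRANSITIONS[state][rng.integers(len(TRANSITIONS[state]))]
        out.append(sym)
    return out

def corrupt(string):
    """Swap one symbol at random to create a (usually) illegal string."""
    s = list(string)
    s[rng.integers(len(s))] = SYMBOLS[rng.integers(len(SYMBOLS))]
    return s

def is_legal(string):
    """Ground-truth grammaticality: follow the FSM and require the terminal state."""
    state = 0
    for sym in string:
        nxt = dict(TRANSITIONS[state]).get(sym)
        if nxt is None:
            return False
        state = nxt
    return not TRANSITIONS[state]

# Rate-based reservoir (echo-state style): fixed random input and recurrent
# weights, tanh units, spectral radius scaled below 1 for fading memory.
N, n_in = 200, len(SYMBOLS)
W_in = rng.normal(0.0, 1.0, (N, n_in))
W = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def final_state(string):
    """Drive the reservoir with one-hot symbols; return the state after the string."""
    x = np.zeros(N)
    for sym in string:
        u = np.zeros(n_in)
        u[SYMBOLS.index(sym)] = 1.0
        x = np.tanh(W @ x + W_in @ u)
    return x

# Dataset: grammar walks plus corrupted walks, labeled by the ground-truth recognizer.
candidates = [sample_walk() for _ in range(300)] + [corrupt(sample_walk()) for _ in range(300)]
labels = np.array([float(is_legal(s)) for s in candidates])
X = np.stack([final_state(s) for s in candidates])

# Supervised linear readout (ridge regression) for the string-legality judgment.
lam = 1e-2
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ labels)
accuracy = (((X @ w_out) > 0.5) == labels.astype(bool)).mean()
print(f"string-legality accuracy (training set): {accuracy:.2f}")
```

In the reservoir computing framework, only the readout weights are trained: the recurrent network supplies a fixed, high-dimensional nonlinear expansion of the input history, which is why a linear readout on the final state suffices for this toy legality judgment.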