Abstract

Whether pattern-parsing mechanisms are specific to language or apply across multiple cognitive domains remains unresolved. Formal language theory provides a mathematical framework for classifying pattern-generating rule sets (or “grammars”) according to complexity. This framework applies to patterns at any level of complexity, stretching from simple sequences, to highly complex tree-like or net-like structures, to any Turing-computable set of strings. Here, we explored human pattern-processing capabilities in the visual domain by generating visual sequences made up of abstract tiles differing in form and color. We constructed different sets of sequences, using artificial “grammars” (rule sets) at three key complexity levels. Because human linguistic syntax is classed as “mildly context-sensitive,” we specifically included a visual grammar at this complexity level. Acquisition of these three grammars was tested in an artificial grammar-learning paradigm: after exposure to a set of well-formed strings, participants were asked to discriminate novel grammatical patterns from non-grammatical patterns. Participants successfully acquired all three grammars after only minutes of exposure, correctly generalizing to novel stimuli and to novel stimulus lengths. A Bayesian analysis excluded multiple alternative hypotheses and showed that the success in rule acquisition applies both at the group level and for most participants analyzed individually. These experimental results demonstrate rapid pattern learning for abstract visual patterns, extending to the mildly context-sensitive level characterizing language. We suggest that a formal equivalence of processing at the mildly context-sensitive level in the visual and linguistic domains implies that cognitive mechanisms with the computational power to process linguistic syntax are not specific to the domain of language, but extend to abstract visual patterns with no meaning.
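The complexity hierarchy the abstract invokes can be illustrated with toy string languages. The sketch below is purely illustrative and hypothetical — these are not the visual grammars used in the study (those are defined in the paper's Methods) — but the three toy languages are standard textbook examples of a regular, a context-free, and a mildly context-sensitive pattern, respectively:

```python
# Illustrative toy languages at three complexity levels of the
# formal-language hierarchy. These are NOT the grammars used in the
# study; they only show how complexity is ranked.

def regular(n):
    """(AB)^n — repeated pairs; recognizable by a finite-state machine."""
    return "AB" * n

def context_free(n):
    """A^n B^n — nested (center-embedded) dependencies; requires a stack."""
    return "A" * n + "B" * n

def mildly_context_sensitive(n):
    """A^n B^n C^n — three matched counts, beyond context-free power;
    mild context-sensitivity is the level attributed to natural-language
    syntax."""
    return "A" * n + "B" * n + "C" * n

for n in (1, 2, 3):
    print(regular(n), context_free(n), mildly_context_sensitive(n))
```

A learner that has only finite-state resources can master the first language but not the second or third; distinguishing the second from the third is what makes the mildly context-sensitive grammar the critical test case.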

Highlights

  • Because different pattern-generating rule systems can be objectively ranked using the mathematical framework of formal language theory (Jäger and Rogers, 2012), patterns of success or failure can be used to evaluate the abilities of different species or human populations to recognize and generalize rules at different levels of complexity (Fitch and Friederici, 2012; Wilson et al., 2013), along with the brain circuitry used to process different types of patterns (Friederici et al., 2006; Pulvermüller, 2010).

  • Similarities in human pattern processing across different sensory domains suggest that pattern-processing abilities generalize across domains and modalities and are not specific to language (Saffran et al., 1999, 2007).

  • This is consistent with a voluminous literature on “artificial grammar learning” (AGL) dating back to Arthur Reber’s work (Reber, 1967), in which use of the term “grammar” by itself carries no implications about the relevance of the rule system to human language.


Introduction

Recent years have seen the rise of a new approach to investigating higher cognition in humans and other animals: the ability to recognize patterns of various types and complexity (Saffran et al., 1996; Marcus et al., 1999; Fitch and Friederici, 2012; ten Cate and Okanoya, 2012). These studies have examined patterns at different levels of complexity (Fitch and Hauser, 2004; Uddén et al., 2012); across different sensory and cognitive domains, including spoken, musical, or visual stimuli (Saffran et al., 1999, 2007); across different categories of humans [e.g., infants, normal adults, or patients (Reber and Squire, 1999; Saffran et al., 1999)]; and across different species of birds and mammals (e.g., Gentner et al., 2006; Murphy et al., 2008; Stobbe et al., 2012; Wilson et al., 2013; Sonnweber et al., 2015). The degree to which pattern perception is domain-specific or modality-general remains debated (Frost et al., 2015), as does the degree to which failures of nonhuman species to master certain rule types truly reflect cognitive limitations at the rule-learning level (ten Cate and Okanoya, 2012).
