Abstract

The cognitive mechanisms underlying statistical learning are engaged for speech processing and language acquisition. However, these mechanisms are shared by a wide variety of species that do not possess the language faculty. Moreover, statistical learning operates across domains, including nonlinguistic material. Ancient mechanisms for segmenting continuous sensory input into discrete constituents evolved for general-purpose segmentation of the environment and have since been readopted for processing linguistic input. Linguistic input provides a rich set of cues to the boundaries between sequential constituents. Such input engages a wider variety of more specialized mechanisms operating on these language-specific cues, potentially reducing the role of conditional statistics in tokenizing a continuous linguistic stream. We provide an explicit within-subject comparison of the utility of statistical learning in language versus nonlanguage domains across the visual and auditory modalities. The results showed that in the auditory modality statistical learning is more efficient with speech-like input, whereas in the visual modality efficiency is higher with nonlanguage input. We suggest that the speech faculty has been important for individual fitness for an extended period, leading to the adaptation of statistical learning mechanisms for speech processing. This is not the case in the visual modality, in which linguistic material presents a less ecological type of sensory input.
