Abstract

Listeners show a reliable bias towards interpreting speech sounds in a way that conforms to linguistic restrictions (phonotactic constraints) on the permissible patterning of speech sounds in a language. This perceptual bias may enforce and strengthen the systematicity that is the hallmark of phonological representation. Using Granger causality analysis of magnetic resonance imaging (MRI)-constrained magnetoencephalography (MEG) and electroencephalography (EEG) data, we tested the differential predictions of rule-based, frequency-based, and top-down lexical influence-driven explanations of processes that produce phonotactic biases in phoneme categorization. Consistent with the top-down lexical influence account, brain regions associated with the representation of words had a stronger influence on acoustic-phonetic regions in trials that led to the identification of phonotactically legal (versus illegal) word-initial consonant clusters. Regions associated with the application of linguistic rules had no such effect. Similarly, high-frequency phoneme clusters failed to produce stronger feedforward influences of acoustic-phonetic regions on areas associated with higher linguistic representation. These results suggest that top-down lexical influences contribute to the systematicity of phonological representation.

Highlights

  • Analyses of effective connectivity focused on the interval between 200 and 400 ms after stimulus onset. We selected this interval based on evidence that listeners show electrophysiological sensitivity to native phonotactic violations in this time period [62,63]. We used Granger analysis techniques to examine patterns of effective connectivity in this time period in trials involving acoustically unambiguous tokens. We chose these tokens to minimize the influence of dynamics related to perceptual ambiguity and to isolate dynamics more directly attributable to phonotactic processes.

  • Regions of interest (ROIs) were identified automatically using an algorithm that located clusters of vertices around activation peaks showing common temporal activation patterns, and then compared the time courses of all clusters to eliminate ROIs that provided redundant information.

  • This analysis was based on all trials so that we could directly compare the strength of interactions between a common set of ROIs supporting phonotactically consistent versus inconsistent responses.
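The core logic of the Granger analyses described above can be illustrated with a simple bivariate case: signal Y is said to Granger-cause signal X when Y's past improves the prediction of X beyond what X's own past provides. The following is a minimal numpy sketch of that variance-ratio idea; it is illustrative only (the function name, model order, and simulated signals are our own, and the study's actual multivariate, source-localized pipeline is not reproduced here).

```python
import numpy as np

def _residual_variance(target, design):
    """Variance of least-squares residuals for one linear model."""
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return np.var(target - design @ coef)

def granger_log_ratio(x, y, order=2):
    """Log ratio of residual variances: restricted AR model of x
    (its own past only) versus full model adding lagged y.
    Positive values mean y's past helps predict x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    target = x[order:]
    # Lagged predictors: column k holds the signal delayed by k samples
    x_lags = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    y_lags = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    ones = np.ones((n - order, 1))
    restricted = np.hstack([ones, x_lags])
    full = np.hstack([ones, x_lags, y_lags])
    return np.log(_residual_variance(target, restricted)
                  / _residual_variance(target, full))

# Simulated example: y drives x with a one-sample lag
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
x = 0.8 * np.roll(y, 1) + 0.2 * rng.standard_normal(2000)
print(granger_log_ratio(x, y))  # clearly positive: y -> x influence
print(granger_log_ratio(y, x))  # near zero: no x -> y influence
```

Comparing the measure in both directions, as in the last two lines, is what distinguishes top-down (lexical-to-phonetic) from feedforward (phonetic-to-lexical) influence in the analyses summarized above.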


Introduction

The lawful patterning of speech sounds to form syllables and words is described by systematic prohibitions on the sequencing of phonemes termed phonotactic constraints. These constraints inform the intuition that doke could be an English word, but lteg could not [1]. Recent simulation results [7] demonstrate that regularization biases have a cumulative effect as the biased percepts of one generation influence the perceptual models that are passed on to the next. In this way, perceptual biases are a factor in regularizing the phonotactic structure of languages. In this paper we examine the dynamic neural processes that support phonotactic repair.

