Abstract

We investigated the role of the syllable during speech processing in German, in an auditory-auditory fragment priming study with lexical decision and simultaneous EEG registration. Spoken fragment primes either shared segments with the spoken targets (related) or not (unrelated), and this segmental overlap either corresponded to the first syllable of the target (e.g., /teis/ – /teisti/) or not (e.g., /teis/ – /teistləs/). The same prime conditions applied to word and pseudoword targets. Lexical decision latencies revealed facilitation for related fragments that corresponded to the first syllable of the target (/teis/ – /teisti/). Despite segmental overlap, there was no facilitation for related fragments that mismatched the first syllable, and none for pseudoword targets. The EEG analyses showed a consistent effect of relatedness, independent of syllabic match, from 200 to 500 ms, including the P350 and N400 windows. This effect held for both words and pseudowords, which, however, differed in the N400 window. The only effect specific to syllabic match for related prime–target pairs was observed in the time window from 200 to 300 ms. We discuss the nature and potential origin of these effects, and their relevance for speech processing and lexical access.

Highlights

  • In a familiar language, listeners perceive speech as a sequence of discrete and meaningful units, though the spoken input consists of a continuous and often noisy signal

  • We investigated the role of the syllable during speech processing in German, in an auditory-auditory fragment priming study with lexical decision and simultaneous EEG registration

  • If syllables play a role in German speech perception, related primes that precisely match the initial syllable, as in /lus/ – /lus.tig/ and /lust/ – /lust.los/, should facilitate recognition of the target


Introduction

Listeners perceive speech as a sequence of discrete and meaningful units, though the spoken input consists of a continuous and often noisy signal. Speakers provide few reliable cues on how to organize this continuous signal into units of meaning, and speech is highly variable between, and even within, speakers. A question that is still not fully resolved is how this variable and noisy input is mapped onto word forms and meaning. One idea is that the input is mapped onto stored sublexical units, which aid access to lexical representations of word form. Among the candidates proposed as mediators between the acoustic input and the lexicon, two have received special attention: phonemes and syllables (Cutler et al., 1986; Dumay et al., 2002; Zwitserlood, 2004).

