Abstract

Background

How do listeners manage to recognize words in an unfamiliar language? The physical continuity of the speech signal, which lacks real silent pauses between words, makes this a difficult task. However, there are multiple cues that can be exploited to locate word boundaries and segment the acoustic signal. In the present study, word-stress was manipulated together with statistical information and placed on different syllables within trisyllabic nonsense words to explore how these cues combine in an online word segmentation task.

Results

The behavioral results showed that words were segmented better when stress was placed on the final syllable than when it was placed on the middle or first syllable. The electrophysiological results showed an increase in the amplitude of the P2 component, which appeared to be sensitive to word-stress and its location within words.

Conclusion

The results demonstrate that listeners can integrate specific prosodic and distributional cues when segmenting speech. An ERP component related to word-stress cues was identified: stressed syllables elicited larger P2 amplitudes than unstressed syllables.
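The P2 finding rests on comparing average event-related responses across conditions. As a rough illustration only, the sketch below shows the standard single-channel epoch-and-average computation in Python with NumPy; the epoch length, baseline, and P2 latency window used here are illustrative assumptions, not the authors' analysis parameters.

    import numpy as np

    def p2_mean_amplitude(eeg, onsets, sfreq, window=(0.15, 0.275)):
        """Average baseline-corrected epochs time-locked to syllable onsets
        and return the mean amplitude in an assumed P2 latency window."""
        pre = int(0.1 * sfreq)    # 100 ms pre-stimulus baseline
        post = int(0.5 * sfreq)   # 500 ms post-onset epoch
        epochs = []
        for s in onsets:
            if s - pre < 0 or s + post > len(eeg):
                continue          # skip epochs that run off the recording
            epoch = eeg[s - pre : s + post]
            epochs.append(epoch - epoch[:pre].mean())  # baseline correction
        erp = np.mean(epochs, axis=0)                  # average ERP waveform
        t0, t1 = (pre + int(w * sfreq) for w in window)
        return erp[t0:t1].mean()

Comparing the value returned for onsets of stressed syllables against that for unstressed syllables is the kind of contrast behind the reported P2 amplitude difference.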

Highlights

  • How do listeners manage to recognize words in an unfamiliar language? The physical continuity of the signal, which lacks real silent pauses between words, makes this a difficult task

  • The only way to identify the embedded words from the continuous speech stream was by tracking the regular positions of each syllable along the sequence, a computational process which is operative at the early age of 8 months [1]


Introduction

Recognizing words in an unfamiliar language is a difficult task because the speech signal is physically continuous, lacking real silent pauses between words. There are, however, multiple cues that can be exploited to locate word boundaries and segment the acoustic signal. Following the paradigm introduced by Saffran et al. [1], we exposed adult volunteers to an artificial language while recording event-related brain potentials. After this learning phase, participants were asked to recognize the nonsense words of the artificial language. A specific feature of this language was that no pauses or other potential cues signaling word boundaries were provided. The only way to identify the embedded words in the continuous speech stream was to track the regular positions of each syllable along the sequence, a computational process (statistical learning) that is operative by the early age of 8 months [1].
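In the Saffran paradigm, this statistical computation is standardly formalized as tracking transitional probabilities between adjacent syllables: within a word they are high, while across a word boundary they drop. The following minimal sketch in Python illustrates the idea; the syllables and three-word inventory are hypothetical examples, not the study's actual stimuli.

    import random
    from collections import Counter

    def transitional_probabilities(stream):
        """Estimate forward transitional probabilities P(next | current)
        for every adjacent syllable pair in a continuous stream."""
        pair_counts = Counter(zip(stream, stream[1:]))
        first_counts = Counter(stream[:-1])
        return {(a, b): n / first_counts[a]
                for (a, b), n in pair_counts.items()}

    # Hypothetical trisyllabic nonsense words, concatenated without pauses.
    words = [("pa", "bi", "ku"), ("ti", "bu", "do"), ("go", "la", "tu")]
    stream = [syl for _ in range(300) for syl in random.choice(words)]

    tp = transitional_probabilities(stream)
    # Within-word transitions approach 1.0; transitions spanning a word
    # boundary hover near 1/3.
    print(tp[("pa", "bi")])           # ~1.0 (word-internal)
    print(tp.get(("ku", "ti"), 0.0))  # ~0.33 (across a boundary)

Dips in transitional probability are thus sufficient, in principle, to mark candidate word boundaries even when the acoustic signal itself is seamless.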

