Abstract

The role of working memory (WM) and long-term lexical-semantic memory (LTM) in the perception of interrupted speech, with and without visual cues, was studied in 29 native English speakers. Perceptual stimuli were periodically interrupted sentences filled with speech noise. The memory measures included an LTM semantic fluency task and verbal and visuo-spatial WM tasks. Whereas perceptual performance in the audio-only condition demonstrated a significant positive association with listeners' semantic fluency, performance in the audio-video condition did not. These results imply that when listening to distorted speech without visual cues, listeners rely on lexical-semantic retrieval from LTM to restore missing speech information.

Highlights

  • In real-life adverse listening situations, auditory information alone is not adequate to achieve ideal speech intelligibility

  • As an extension to previous research, in the present study we aimed to address the role of working memory capacity (WMC) and lexical-semantic memory (LTM) retrieval ability in processing visual cues when recognizing auditorily distorted sentences

  • The results of a paired-samples (repeated-measures) t-test revealed that participants performed significantly better in the audio-video condition than in the audio-only condition, t(28) = 13.83, p < 0.001
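
With 29 listeners, the reported t(28) reflects a paired-samples (repeated-measures) comparison of each listener's audio-video and audio-only scores. As a minimal sketch, using hypothetical proportion-correct scores (the values and variable names below are assumptions, not the study's data), such a comparison could be computed with SciPy as follows:

    # Minimal sketch with hypothetical scores (not the study's data):
    # paired-samples t-test of audio-video vs. audio-only performance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_listeners = 29                                   # sample size reported in the abstract
    audio_only = rng.normal(0.55, 0.08, n_listeners)   # hypothetical proportion-correct scores
    audio_video = audio_only + rng.normal(0.20, 0.05, n_listeners)

    result = stats.ttest_rel(audio_video, audio_only)  # paired (repeated-measures) t-test
    print(f"t({n_listeners - 1}) = {result.statistic:.2f}, p = {result.pvalue:.4g}")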

Introduction

In real-life adverse listening situations, auditory information alone is not adequate to achieve ideal speech intelligibility. Visual cues congruent with auditory information provide compensatory support for speech understanding, and this support is manifested in several ways. First, seeing a speaker's face enables us to track the speech source effectively. Second, visual cues provide articulatory gestures such as tongue height and tongue movement for vowels and place of articulation for consonants. Third, the temporal continuity of the audio-visual stimulus aids the segmentation of continuous speech into individual words. Finally, visual cues add to supra-segmental information, including intonation, stress, and rhythmic properties of the auditory stimuli.
