Abstract

Previous studies have found that perception in older people benefits from multisensory over unisensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio–visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence, to assess whether audio–visual integration is affected by top-down semantic processing. We presented participants with audio–visual sentences in which the visual component was either blurred or not blurred. We found a greater cost in recall performance for semantically meaningless speech in the audio–visual ‘blur’ condition than in the audio–visual ‘no blur’ condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

Highlights

  • The current study investigated the effect of aging on audio–visual speech perception.

  • We manipulated the reliability of the sensory information in an audio–visual video of an actor articulating sentences by either blurring the image (AV blur) or not (AV no blur), and we manipulated the semantic content by presenting either meaningful or non-meaningful sentences.

  • In terms of sentence recall performance, younger adults were better at the task than older adults, and for both groups, meaningful sentences were more accurately recalled than non-meaningful sentences.

Introduction

Perception in the everyday world is rarely based on inputs from one sensory modality (Stein and Meredith, 1993; Shimojo and Shams, 2001; Shams and Seitz, 2008; Spence et al., 2009), and the integration of multiple sensory cues can both disambiguate the perception of, and speed up reaction to, external stimuli (Stein et al., 1989; Schröger and Widmann, 1998; Bolognini et al., 2005). This multisensory enhancement (ME) is most likely to occur when two or more sensory stimuli correspond with one another both spatially and temporally (Bolognini et al., 2005; Holmes and Spence, 2005; Senkowski et al., 2007). Jääskeläinen et al. (2004) provided further evidence for this effect using whole-head magnetoencephalography (MEG): they found that an N100m response in the auditory cortex, which was consistently evoked approximately 100 ms after an auditory speech input, decreased in amplitude when the auditory input was preceded by visual input compared to when it was not. Visual information in speech may induce activation in the same neurons in the auditory cortex that are responsible for processing phonetic information (e.g., Besle et al., 2004). Davis et al. (2008) suggested that this additional visual information may decrease the subsequent processing load on the auditory cortex during speech perception.
