Abstract

Audiovisual (AV) cues often lead to improved speech understanding, especially when the auditory input is impoverished (e.g., in background noise or for people with hearing loss). This AV benefit arises when the auditory and visual information can be integrated effectively (e.g., through temporal synchrony). However, whether the complexity of the speech signal influences this integrated percept, and how this contributes to speech understanding, is still unknown. This study investigates whether individual differences in the perception of temporal synchrony between the auditory and visual speech signals explain individual variability in AV speech recognition, and whether this differs across levels of linguistic processing. By implementing five levels of linguistic complexity (i.e., varying phoneme-viseme connections, vocabulary knowledge, and/or linguistic context) and introducing temporal asynchrony in a remote task with English-speaking adults, we ask the following questions: (1) Is temporal synchrony needed to observe AV speech perception benefits? (2) Does this vary by linguistic level? (3) Does linguistic complexity influence synchrony perception? (4) Does synchrony perception explain individual variability in AV speech perception? These findings inform how AV synchronization should be balanced in hearing aid designs as well as in augmented/virtual reality devices.
