Abstract

Researchers have demonstrated that visual and auditory cues interact, improving speech intelligibility under noisy listening conditions. For instance, recent findings showed that simulated cataracts hinder listeners’ ability to use visual cues to understand (i.e., speechread) televised spoken sentences. The purpose of this study was to determine which measures of visual, auditory, and cognitive performance predicted participants’ ability to speechread televised spoken messages in the presence of background babble. Specifically, 30 young adults with normal visual acuity and hearing sensitivity completed a battery of visual, auditory, and cognitive assessments. Speech intelligibility was tested under two conditions: auditory-only with no visual input and auditory-visual with normal viewing. Speech intelligibility scores were used to calculate average visual enhancement, that is, the average benefit participants gained from viewing visual information in addition to auditory information. Regression analyses demonstrated that the best predictors of visual enhancement were measures of contrast sensitivity and executive functioning, including the Digit Symbol Substitution Test and the Trail Making Test, Part B. These results suggest that audiovisual speech integration depends on both low-level sensory information and high-level cognitive processes, particularly those associated with executive functioning.
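
As a point of reference for the visual enhancement measure, the abstract does not state the exact formula used, so the following is only a sketch of how such scores are commonly defined in the audiovisual speech literature (e.g., a Sumby–Pollack-style normalization), not necessarily the authors’ computation. With A denoting the proportion of speech correctly identified in the auditory-only condition and AV the proportion correct in the auditory-visual condition, visual enhancement can be expressed either as the raw gain or as a gain normalized by the room available for improvement:

\[ VE_{\mathrm{raw}} = AV - A, \qquad VE_{\mathrm{norm}} = \frac{AV - A}{1 - A}. \]

Averaging these per-listener gains across test materials would yield the average visual enhancement score that served as the dependent measure in the regression analyses.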
