Abstract

Using a dual-task paradigm, two experiments were conducted to assess differences in the amount of listening effort expended to understand speech in noise in audiovisual (AV) and audio-only (A-only) modalities. Experiment 1 used equivalent noise levels in both modalities, and Experiment 2 equated speech recognition performance levels by increasing the noise in the AV relative to the A-only modality. Sixty adults were randomly assigned to Experiment 1 or Experiment 2. Participants performed speech and tactile recognition tasks separately (single task) and concurrently (dual task). The speech tasks were performed in both modalities. Accuracy and reaction time data were collected, along with ratings of perceived accuracy and effort. In Experiment 1, speech recognition in the AV modality was rated as less effortful, and accuracy scores were higher, than in the A-only modality. In Experiment 2, reaction times were slower, tactile task performance was poorer, and listening effort increased in the AV versus the A-only modality. At equivalent noise levels, speech recognition performance was enhanced and subjectively less effortful in the AV than in the A-only modality. At equivalent accuracy levels, the dual-task performance decrements (for both tasks) suggest that the noisier AV modality was more effortful than the A-only modality.

Full Text

Paper version not known