Abstract

Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use the AIST to investigate the effect of background noise type and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude-modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine conditions. The reading span test assessed WMC, and the letter memory test assessed UA. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but worsened at poorer SNRs when the background noise was speech-like. Performance on the AIST also decreased with increasing memory load. Correlations between AIST performance and the cognitive measures suggested that WMC matters more for listening at poorer SNRs, while UA matters more at easier SNRs. The results indicate that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of noise type. However, when the noise is speech-like and intelligibility decreases, listening effort increases, probably because the informational masking created by speech fragments and vocal sounds in the background noise places extra demands on cognitive resources.
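For readers unfamiliar with the measure, SNR expresses the level of the target speech relative to the background noise, conventionally in decibels. A minimal sketch of that relationship (illustrative only; the values below are not from the study's materials):

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# Equal signal and noise power gives 0 dB;
# signal power 10x the noise power gives +10 dB.
print(snr_db(1.0, 1.0))   # 0.0
print(snr_db(10.0, 1.0))  # 10.0
```

Lowering the SNR (e.g. from +4 dB toward 0 dB) therefore means the noise power approaches or exceeds the speech power, which is the manipulation that made the speech-like maskers more effortful in this study.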

Highlights

  • Speech understanding requires the interplay of top–down and bottom–up processes

  • Speech intelligibility data were not collected in this study; intelligibility levels for amplitude-modulated noise (AMN) and for the international speech test signal (ISTS) are based on equalization data obtained from 10 subjects prior to the current study

  • The sentence question (SQ) can be considered a measure of speech recognition, since the question probes whether the sentence was heard, even though the three-choice procedure facilitates performance by offering answer alternatives and has a 33% chance level; the results suggested that general speech intelligibility was at the expected levels, above 91% (Rönnberg et al., 2014)


Introduction

Speech understanding requires the interplay of top–down and bottom–up processes. Top–down processes include the cognitive abilities that allow speech perception and comprehension (Davis and Johnsrude, 2007; Besser et al., 2013), while bottom–up processes include the perception of sound and the ability to hear. Listening can be viewed as a higher-order function that requires intention and attention (Kiessling et al., 2003; Pichora-Fuller and Singh, 2006). Listening is required when heard information is to be processed for comprehension and remembered. The processes involved in listening, intention and attention, load on cognitive resources and demand expenditure of effort (Kiessling et al., 2003; Pichora-Fuller and Singh, 2006).

