Abstract

The most common complaint of a person with hearing loss is difficulty understanding speech in noisy environments. Although communicating in noisy environments can prove challenging even for people with normal hearing, the brain uses acoustic cues from a complex sound mixture to separate the speech signal of interest from competing sounds, a process known as auditory scene analysis. At a cocktail party, we can thus identify and concentrate on the speech of the person with whom we are speaking despite the competing speech of other talkers, or follow a conversation among multiple talkers when seated at a table in a noisy restaurant. Although it has long been known that people with hearing loss have greater difficulty understanding speech in noisy situations (with or without hearing aids), little was known about how hearing loss or the signal processing in hearing prostheses (hearing aids or cochlear implants) might affect specific aspects of listening thought to be important to auditory scene analysis. This issue of Trends in Amplification contains a series of articles that focus on aspects of listening in complex environments by persons with hearing loss, the effect of hearing aids and cochlear implants on listener performance in difficult listening environments, and how knowledge of the mechanisms underlying auditory scene analysis might be used to improve the design of hearing aids and cochlear implants. In the first article, Shinn-Cunningham and Best focus on the role of selective attention in complex listening situations, that is, the "cocktail party" problem. They describe and explain why and how listeners with normal hearing are able to use attention to separate the wanted signal from a mixture of multiple voices, and they explain how the degradation of "bottom-up" cues caused by hearing loss might affect auditory object formation and the ability to use selective attention. Finally, the authors address how hearing aids might help or hinder listening in complex environments. In the second article, Marrone, Mason, and Kidd report the results of a study investigating the ability of hearing aid users to benefit from spatial separation of the target talker and competing talkers (spatial release from masking) in reverberant environments. Listeners with hearing loss were tested with and without their personal hearing aids (unilateral and bilateral test conditions), and age-matched listeners with normal hearing served as a control group. The study revealed that listeners with hearing loss did benefit from spatial separation of the target and maskers, but their spatial release from masking was smaller than that obtained by listeners with normal hearing; greater benefit was obtained in the bilateral listening condition. The third article, by Oxenham, focuses on the role of pitch in grouping sounds from the same source and segregating sounds from different sources when listening in complex environments. The article provides a tutorial on current models of pitch perception, as well as a review of research on pitch perception and pitch-based sound source segregation by listeners with normal hearing and hearing loss. The importance of preserving spectral information and temporal fine structure in hearing aid and cochlear implant processors is emphasized. In the final article, Wang describes an engineering approach to the problem of separating signal from noise. He introduces the idea of using time-frequency masks to separate speech from interfering noise, reviews recent research in this area, and discusses the suitability of this approach for hearing aid applications.
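To illustrate the time-frequency masking idea that Wang's article discusses, the short Python sketch below applies an ideal binary mask to a noisy mixture. It is a minimal toy example, not Wang's method: it assumes oracle access to the separate speech and noise signals, and the synthetic tone, noise level, STFT settings, and 0 dB local criterion are all illustrative assumptions.

import numpy as np
from scipy.signal import stft, istft

# Toy signals: a 1-second tonal "speech" stand-in plus broadband noise (assumed values).
fs = 16000
t = np.arange(fs) / fs
speech = 0.5 * np.sin(2 * np.pi * 300 * t)
noise = 0.3 * np.random.randn(fs)
mixture = speech + noise

# Short-time Fourier transforms of the sources and of the mixture.
_, _, S = stft(speech, fs=fs, nperseg=512)
_, _, N = stft(noise, fs=fs, nperseg=512)
_, _, X = stft(mixture, fs=fs, nperseg=512)

# Ideal binary mask: keep a time-frequency unit when the local speech
# magnitude exceeds the noise magnitude (a 0 dB criterion), else discard it.
mask = (np.abs(S) > np.abs(N)).astype(float)

# Apply the mask to the mixture and resynthesize the estimated speech.
_, speech_estimate = istft(mask * X, fs=fs, nperseg=512)

In practice the mask must be estimated from the mixture alone; the oracle mask shown here is only the target that such estimators aim to approximate.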
