Abstract

The ability to localize sound sources in 3D space was tested in humans. Five normal-hearing (NH) subjects listened via headphones to noises filtered with subject-specific head-related transfer functions. Four bilateral cochlear implant (CI) subjects listened via their clinical speech processors to noises filtered with subject-specific behind-the-ear head-related transfer functions. A virtual structured environment was presented via a head-mounted display. Two conditions were used: a naive condition, in which subjects received no response feedback, and a learning condition, in which subjects were trained by providing extensive feedback during the test. Response feedback was provided via the visual virtual environment. The results show that the CI listeners generally performed worse than NH listeners in both the horizontal and vertical dimensions. Both subject groups were able to learn to localize sound sources better, as shown by lower localization errors in the learning condition. However, in the learning condition, the CI listeners showed a front/back confusion rate comparable to that of naive NH listeners, which was twice as high as that of the trained NH listeners. These results indicate the need for new CI processing strategies that include spectral localization cues. Funding by FWF (P18401-B15).
