Abstract
Mathematical models can be useful for understanding complicated systems and for testing, through simulation, algorithms that would be difficult or expensive to implement. This paper proposes a model to simulate the sound localization performance of profoundly hearing-impaired persons who use bilateral cochlear implants (CIs). The expectation is that this model could serve as a tool for developing new signal processing algorithms for neural encoding strategies. The head-related transfer function (HRTF) is a critical component of the model: it captures the effects of head shadow, torso, and pinna, which define the temporal, intensity, and spectral cues important to sound localization. The model was first developed to simulate normal-hearing listeners and validated against published literature on HRTFs and localization. It was then extended to account for differences in the CI user's signal pathway due to sound processing effects and to microphone placement relative to ear canal acoustics. Finally, the localization error predicted by the model for CI users was compared to published localization data from this population.
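To make the temporal-cue component of an HRTF concrete, the interaural time difference (ITD) for a source at a given azimuth is often approximated with the classic Woodworth spherical-head formula. The sketch below is a minimal illustration of that standard approximation only; the function name and parameter defaults (an 8.75 cm head radius, 343 m/s speed of sound) are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate ITD in seconds for a spherical head (Woodworth formula).

    azimuth_deg: source azimuth, 0 = straight ahead, 90 = directly to one side.
    Returns (a / c) * (theta + sin(theta)), the extra path-length delay
    around the head divided by the speed of sound.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

# A source at 90 degrees azimuth gives roughly 0.65 ms ITD for an
# average-sized head, consistent with commonly cited maximum ITDs.
itd_side = woodworth_itd(90.0)
```

Under these assumptions the ITD grows from zero at the midline to about 0.65 ms at 90 degrees, which is the temporal cue a localization model would compare against interaural intensity and spectral cues.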
Highlights
A number of techniques have been used by researchers to model the human head-related transfer function (HRTF) and its impact on sound localization
This paper proposes a model to simulate the sound localization performance of profoundly hearing-impaired persons who use bilateral cochlear implants (CIs)
One can clearly see that a great deal of spectral information is lost to the cochlear implant user
Summary
A number of techniques have been used by researchers to model the human head-related transfer function (HRTF) and its impact on sound localization. These models take into account head shadow and pinna (outer ear) effects, and their influence on interaural (between-ear) spectral, timing, and intensity cues. Profoundly hearing-impaired persons who use bilateral cochlear implants (CIs) have more limited and degraded cues available for sound localization. This is due to several factors: current CI sound processors maintain only some of the interaural intensity and time cues; the microphone is placed above and behind the ear, so cues from pinna and ear canal acoustics are lost; and the damaged auditory nervous system may not be able to use all the cues that are provided. Combining timing, amplitude, and spectral difference information from both ears also aids listening in background noise, providing a better central representation than information from one ear alone, e.g. Zurek [3].

Matin / World Journal of Neuroscience 3 (2013) 136-141
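The summary notes that CI processors maintain only some interaural timing cues. The sketch below illustrates why, under a simplifying assumption: envelope-based processing, modeled here crudely as half-wave rectification followed by a moving-average low-pass filter, preserves the slow amplitude envelope of a channel but discards the carrier's temporal fine structure, where much of the fine-timing ITD information resides. The function name and parameters are hypothetical and this is not the specific strategy evaluated in the paper.

```python
import numpy as np

def channel_envelope(signal, fs, cutoff_hz=200.0):
    """Crude envelope extractor: half-wave rectify, then smooth with a
    moving average roughly one cutoff period long."""
    rectified = np.maximum(signal, 0.0)
    win = max(1, int(fs / cutoff_hz))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)
tone = np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz carrier with fine structure
env = channel_envelope(tone, fs)         # nearly flat: fine structure is gone
```

In this toy example the 1 kHz oscillation, whose phase carries fine-structure ITD cues, is smoothed away, leaving an almost constant envelope; only envelope timing and level (ILD) cues would survive such processing.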