Abstract
Cochlear implants (CIs) are the most successful sensory prosthetic devices developed to date, as judged by their ability to restore, or provide for the first time, hearing function and normal patterns of speech. Compared with attempts to restore vision in the blind or movement in the paralyzed, the restoration of hearing by cochlear implantation is considerably advanced, and potentially constitutes a model for how other sensory prosthetic devices might develop. Nevertheless, significant progress will be required if CI listeners are to be provided with hearing abilities that allow them to operate in most everyday listening environments, where multiple sound sources must be attended to against a background of interfering acoustic signals (noise). Current strategies for CI stimulation are dominated by acoustical and psychophysical understandings of normal hearing, and rely on assumptions as to the form of the underlying neural processes, the consequences of hearing loss for central auditory structure and function, and the means by which these (assumed) functions are both impaired and amenable to recovery following implantation. It is perhaps not surprising that this is the case; the neural basis for the extraction and representation of the pitch of complex sounds, for example, a fundamental aspect of speech and many environmental sounds, remains poorly understood in normal hearing, let alone following cochlear implantation. It is clear that CI research will continue against a backdrop of advances in our understanding of normal hearing function (and improved technology for implantable devices). Nevertheless, the extent to which current CI technologies can, or ever will be able to, provide the auditory brain with what is required to function in listening environments in which normal-hearing individuals perform seemingly effortlessly remains an open question. An analogy with current automatic speech recognition (ASR) software is apposite: both CI and ASR suffer from the same performance deficits. Reconstruction of the source (or intelligibility) is highly compromised by background noise; the ability to segregate even two sources, or to ‘stream’ a single source, is poor even at relatively high signal-to-noise ratios; and there is often a requirement to provide prior or additional information concerning the nature of the signal before it can be effectively reconstructed or retrieved (Brown et al., 2010). This suggests that the engineering and computational approaches to both fields have reached the same impasse. What, then, is missing? Clearly, what is missing from the equation, literally in the case of ASR and figuratively in the case of CI, is the auditory brain. Perhaps the first step towards a neuro-centric approach to CI is to acknowledge the primacy of the brain over cochlear filtering in normal hearing. It is generally agreed that appropriate ‘cochleotopic’ patterns of electrical stimulation are important to successful outcomes in CI (Fallon et al., 2008). Nevertheless, it should also be evident that the entire concept of tonotopy in, for example, a prelingually deafened child relates to where auditory nerve fibres project, rather than from where on the basilar membrane they formerly (or perhaps never) received synaptic input. That the brain might care about tonotopy in the absence of any cochlear input highlights that the cochlea evolved to serve the needs of the brain, and not the other way round.