Abstract

Spectral ripple discrimination is a popular measure of spectral resolution that has been shown to correlate with speech recognition scores in cochlear implant (CI) listeners. In the test, listeners distinguish sounds whose spectral peaks vary in density at a given spectral modulation depth. We argue that there are numerous significant flaws with the application of the test specifically in CI listeners. First, the spectrum is aliased by the CI processor in a way analogous to frequency aliasing in under-sampled time series. Beyond a critical spectral density, the spectral envelope changes in a chaotic fashion and is no longer under experimenter control. This critical density is exceeded in numerous published studies. Furthermore, the densities linked with "good" performance are not only outliers, but are entirely unrelated to the spectral densities of real speech sounds, and likely exert undue leverage on correlation values. Additionally, there are reports of experience and learning effects, inconsistent with the often-stated goal of the test to avoid such factors. We show how artefactual nonlinearities at high spectral densities may unintentionally match the spectral envelope characteristics of speech sounds, an unfortunate coincidence that has likely produced spurious results sustaining the use of this test.
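The aliasing argument can be illustrated with a minimal sketch. Assume a highly simplified model (all names and parameters here are hypothetical, not taken from the paper): a CI processor with a fixed number of discrete analysis channels sampling a sinusoidal spectral ripple. Just as an under-sampled time series folds frequencies above the Nyquist limit back into the band, a ripple density above half the channel count produces a channel-level pattern indistinguishable (up to sign) from a lower-density ripple:

```python
import numpy as np

# Hypothetical toy model: n_channels discrete analysis bands sample a
# sinusoidal spectral ripple. Densities above n_channels / 2 alias,
# exactly as an under-sampled time series does.
n_channels = 16
channel_index = np.arange(n_channels)  # channel number as the "frequency" axis

def sampled_ripple(density, depth_db=20.0):
    """Channel-level envelope (dB re: mean) of a ripple with `density`
    cycles across the array, as seen through the discrete channels."""
    return (depth_db / 2) * np.sin(2 * np.pi * density * channel_index / n_channels)

# A ripple at density d and one at n_channels - d yield channel patterns
# that are mirror images of each other: the listener (and experimenter)
# cannot tell a 13-ripple stimulus from an inverted 3-ripple stimulus.
d = 3
print(np.allclose(sampled_ripple(d), -sampled_ripple(n_channels - d)))
```

In this toy model the experimenter nominally presents a dense 13-cycle ripple, but the electrode array carries the pattern of a sparse 3-cycle ripple, which is the loss of experimenter control over the spectral envelope that the abstract describes.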
