Abstract

A series of previous studies (Souza et al., 2015; 2018; 2020) developed a “cue profile” test, based on synthetic speech sounds, to assess a hearing-impaired listener’s use of speech cues. The resulting “weighting angle” quantifies how well individual listeners utilize higher-precision spectro-temporal information (such as formant transitions) or whether they instead rely on lower-precision temporal (amplitude envelope) cues in consonant identification. The fact that the amount of hearing loss was not associated with the cue profile underscores the need to characterize individual abilities in a more nuanced way than can be captured by the pure-tone audiogram (Souza et al., 2020). One drawback of the current test is that it is time-consuming, making it impractical to deploy in clinical settings. This study employed an inter-trial stability metric, based on deviations from a moving average, to explore an early stopping point of fewer than 200 trials rather than the full 375-trial test, a substantial reduction in testing time. A three-way weighting-angle classifier, implemented as a long short-term memory (LSTM) network, was piloted for feasibility as an alternative time-reduction method. The results, while preliminary, are encouraging. [Work supported by NIH, Grant No. R01 DC006014 (PI: Souza).]
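
The sketch below illustrates, in Python, the general kind of early-stopping rule the abstract describes: the weighting angle is re-estimated after every trial, and testing could halt once the running estimate has stayed close to its own moving average for a stretch of consecutive trials. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name, window length, and tolerance are hypothetical choices.

    import numpy as np

    def stable_stopping_point(angle_estimates, window=20, tol_deg=2.0):
        """Return the first trial index at which the running weighting-angle
        estimate has stayed within tol_deg degrees of its moving average for
        `window` consecutive trials, or None if it never stabilizes.

        angle_estimates : 1-D array of the weighting angle recomputed after
                          each completed trial (length = trials completed).
        window, tol_deg : illustrative values, not taken from the study.
        """
        estimates = np.asarray(angle_estimates, dtype=float)
        for t in range(window, len(estimates)):
            # Moving average over the preceding `window` running estimates.
            moving_avg = estimates[t - window:t].mean()
            # Deviations of the recent estimates (including the current one)
            # from that moving average.
            recent_dev = np.abs(estimates[t - window:t + 1] - moving_avg)
            if np.all(recent_dev <= tol_deg):
                return t  # test could stop here instead of running all 375 trials
        return None

    # Example: a simulated trajectory that wanders early, then settles near 35 degrees.
    rng = np.random.default_rng(0)
    trajectory = 35 + 15 * np.exp(-np.arange(375) / 60) * rng.standard_normal(375)
    print(stable_stopping_point(trajectory))

In this toy example the criterion typically triggers well before trial 375, which is the behavior an early-stopping rule of this kind is meant to exploit; the actual stopping point, metric definition, and thresholds in the study may differ.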
