Abstract

Purpose: The aim of this study was to develop and evaluate a novel, automated speech-in-noise test viable for widespread in situ and remote screening.

Method: Vowel-consonant-vowel sounds were used in a multiple-choice consonant discrimination task, with recordings from a professional male native English speaker. A novel adaptive staircase procedure was developed, based on the estimated intelligibility of the stimuli rather than on theoretical binomial models. Test performance was assessed in 26 young adults (YAs) with normal hearing and in 72 unscreened adults (UAs), including native and nonnative English listeners.

Results: The proposed test provided accurate estimates of the speech recognition threshold (SRT) compared with a conventional adaptive procedure. Consistent outcomes were observed in YAs in test/retest and in controlled/uncontrolled conditions, and in UAs in both native and nonnative listeners. In UAs, the SRT increased with increasing age, hearing loss, and self-reported hearing handicap. Test duration was similar in YAs and UAs irrespective of age and hearing loss. Test-retest repeatability of the SRT was high (Pearson correlation coefficient = .84), and the pass/fail outcome of the test was reliable across repeated measures (Cohen's κ = .8). The test was accurate in identifying ears with pure-tone thresholds > 25 dB HL (accuracy = 0.82).

Conclusion: This study demonstrated the viability of the proposed test in listeners with varying language backgrounds in terms of accuracy, reliability, and short test time. Further research is needed to validate the test in a larger population across a wider range of languages and degrees of hearing loss and to identify optimal classification criteria for screening purposes.
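For readers unfamiliar with adaptive staircase testing, the sketch below illustrates a conventional 1-up/1-down staircase for SRT estimation, of the kind against which the proposed intelligibility-based procedure is compared. It is a generic illustration only, not the authors' procedure: the function present_trial, the step size, the stopping rule, and the simulated listener are all assumptions made for demonstration.

```python
import random


def adaptive_srt_staircase(present_trial,
                           start_snr_db=10.0,
                           step_db=2.0,
                           n_reversals=8):
    """Estimate a speech recognition threshold (SRT) with a simple
    1-up/1-down adaptive staircase.

    `present_trial(snr_db)` is assumed to present one vowel-consonant-vowel
    stimulus at the given SNR and return True if the listener chose the
    correct consonant, False otherwise.
    """
    snr_db = start_snr_db
    last_correct = None
    reversals = []

    while len(reversals) < n_reversals:
        correct = present_trial(snr_db)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr_db)  # direction change: record a reversal
        last_correct = correct
        # Make the task harder after a correct response, easier after an error.
        snr_db += -step_db if correct else step_db

    # SRT estimate: mean SNR over the later reversals (discard the first two).
    tail = reversals[2:]
    return sum(tail) / len(tail)


if __name__ == "__main__":
    # Toy simulated listener with a "true" SRT of -4 dB SNR.
    def simulated_listener(snr_db, true_srt=-4.0):
        p_correct = 1.0 / (1.0 + 10 ** (-(snr_db - true_srt) / 4.0))
        return random.random() < p_correct

    print(f"Estimated SRT: {adaptive_srt_staircase(simulated_listener):.1f} dB SNR")
```

A 1-up/1-down rule converges near the 50% point of the psychometric function; real screening tests typically adjust the rule (or, as in this study, use estimated stimulus intelligibility) to target a different performance level.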
