Abstract

Objective: This study aimed to maximise the ability of stimulus-frequency otoacoustic emissions (SFOAEs) to predict hearing status and thresholds using machine-learning models.
Design: SFOAE data and audiometric thresholds were collected at octave frequencies from 0.5 to 8 kHz. Support vector machine, k-nearest neighbour, back-propagation neural network, decision tree, and random forest algorithms were used to build classification models for hearing-status identification and regression models for threshold prediction.
Study sample: About 230 ears with normal hearing and 737 ears with sensorineural hearing loss.
Results: All classification models yielded areas under the receiver operating characteristic curve of 0.926–0.994 at 0.5–8 kHz, superior to those reported in a previous SFOAE study. The regression models produced lower standard errors (8.1–12.2 dB; mean absolute errors: 5.53–8.97 dB) than those previously reported for distortion-product and transient-evoked otoacoustic emissions (8.6–19.2 dB).
Conclusions: SFOAEs combined with machine-learning approaches offer a promising tool for predicting hearing capability, at least at 0.5–4 kHz. Future research may focus on further improvements in accuracy and reductions in test time to enhance clinical utility.
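
The abstract does not include code, but the modelling pipeline it describes can be illustrated with a minimal scikit-learn sketch: the five named algorithms fitted as classifiers for hearing-status identification and as regressors for threshold prediction, scored with ROC AUC and mean absolute error. The feature matrix, sample size, and 25 dB HL cutoff below are synthetic placeholders, not the study's data or protocol.

```python
# Minimal sketch (not the authors' code) of the classification/regression
# pipeline described in the abstract, using synthetic placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC, SVR
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import roc_auc_score, mean_absolute_error

rng = np.random.default_rng(0)
n_ears = 400                       # placeholder sample size, not the study's
X = rng.normal(size=(n_ears, 6))   # hypothetical SFOAE features per ear
threshold_db = 20 + 10 * X[:, 0] + rng.normal(scale=8, size=n_ears)
status = (threshold_db > 25).astype(int)  # assumed cutoff: 1 = hearing loss

X_tr, X_te, y_tr, y_te, t_tr, t_te = train_test_split(
    X, status, threshold_db, test_size=0.3, random_state=0)

classifiers = {
    "SVM": SVC(probability=True),
    "kNN": KNeighborsClassifier(),
    "BPNN": MLPClassifier(max_iter=2000),
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(),
}
regressors = {
    "SVM": SVR(),
    "kNN": KNeighborsRegressor(),
    "BPNN": MLPRegressor(max_iter=2000),
    "Decision tree": DecisionTreeRegressor(),
    "Random forest": RandomForestRegressor(),
}

# Hearing-status identification, evaluated with ROC AUC.
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name} classification AUC: {auc:.3f}")

# Threshold prediction, evaluated with mean absolute error in dB.
for name, reg in regressors.items():
    model = make_pipeline(StandardScaler(), reg).fit(X_tr, t_tr)
    mae = mean_absolute_error(t_te, model.predict(X_te))
    print(f"{name} threshold MAE: {mae:.1f} dB")
```

In practice such models would be trained per test frequency (0.5–8 kHz) on measured SFOAE features rather than on random data as in this sketch.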
