Abstract

1) To describe distortion product otoacoustic emission (DPOAE) test performance when a priori response criteria are applied to a large set of DPOAE data. 2) To describe DPOAE test performance when multifrequency definitions of auditory function are used. 3) To determine DPOAE test performance when a single decision regarding auditory status is made for an ear, based on DPOAE data from several frequencies. 4) To compare univariate and multivariate test performance when multifrequency gold standard definitions and response criteria are applied to DPOAE data.

DPOAE and audiometric data were analyzed from 1267 ears of 806 subjects. These data were evaluated for three different frequency combinations (2, 3, 4 kHz; 2, 3, 4, 6 kHz; 1.5, 2, 3, 4, 6 kHz). DPOAE data were collected for each of the f2 frequencies listed above, using primary levels (L1/L2) of 65/55 dB SPL and a primary ratio (f2/f1) of 1.22. Sensitivity and specificity were evaluated for signal-to-noise ratios (SNRs) of 3, 6, and 9 dB, which are in common clinical use. In addition, test performance was evaluated using clinical decision theory, following the convention we have used in previous reports on otoacoustic emission test performance. Both univariate and multivariate analysis techniques were applied to the data. In addition to evaluating DPOAE test performance for the case in which the audiometric and f2 frequencies were equal, multifrequency gold standards and multifrequency criterion responses were evaluated. Three new gold standards were used to assess test performance: average pure-tone thresholds; extrema thresholds, which took into account both the magnitude of the loss and the number of frequencies at which hearing loss existed; and a combination of the two. These new gold standards were applied to each of the three frequency groups described above.

As expected, SNR criteria of 3, 6, and 9 dB never resulted in perfect DPOAE test performance.
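The sensitivity and specificity analysis described above can be illustrated with a minimal sketch. The ear-level data, the 6 dB SNR criterion applied here, and the 20 dB HL normal/impaired cutoff are all illustrative assumptions for demonstration; they are not the study's dataset or necessarily its audiometric cutoff.

```python
# Sketch: test performance for an a priori DPOAE SNR criterion against an
# audiometric gold standard. All values below are hypothetical; the 6 dB
# criterion and 20 dB HL cutoff are assumptions chosen for illustration.

def classify_by_snr(snr_db, criterion_db=6.0):
    """An ear 'passes' (is called normal) if its DPOAE SNR meets the criterion."""
    return snr_db >= criterion_db

def sensitivity_specificity(snrs, thresholds_hl, criterion_db=6.0, cutoff_hl=20.0):
    tp = fp = tn = fn = 0
    for snr, thr in zip(snrs, thresholds_hl):
        impaired = thr > cutoff_hl                      # gold standard: pure-tone threshold
        called_impaired = not classify_by_snr(snr, criterion_db)
        if impaired and called_impaired:
            tp += 1                                     # hit
        elif impaired:
            fn += 1                                     # miss
        elif called_impaired:
            fp += 1                                     # false alarm
        else:
            tn += 1                                     # correct rejection
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Hypothetical ears: DPOAE SNR (dB) paired with pure-tone threshold (dB HL).
snrs       = [12.0, 8.0, 5.0, 2.0, -1.0, 10.0]
thresholds = [5.0, 15.0, 30.0, 45.0, 60.0, 35.0]
sens, spec = sensitivity_specificity(snrs, thresholds)
```

Note that the last hypothetical ear (SNR 10 dB, threshold 35 dB HL) is missed by the 6 dB criterion, mirroring the abstract's point that fixed SNR criteria do not identify all ears with hearing loss, particularly mild losses.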
Even the most stringent of these criteria (9 dB SNR) did not result in a sensitivity of 100%. This result suggests that caution should be exercised in the interpretation of DPOAE test results when these a priori criteria are used clinically. Excellent test performance was achieved when auditory status was classified on the basis of the new gold standards and when either SNR or the output of multivariate logistic regressions (LRs) was used as the criterion measure. Invariably, the LR resulted in test performance superior to that achieved with the SNR. For SNR criteria of 3, 6, and 9 dB and (by definition) for the LR, specificity in general exceeded 80% and often was greater than 90%. Sensitivity, however, depended on the magnitude of hearing loss. Diagnostic errors, when they occurred, were more common for patients with mild hearing losses (21 to 40 dB HL); sensitivity approached 100% once the hearing loss exceeded 40 dB HL. The largest differences between test performance based on the SNR and the LR occurred for ears with mild hearing loss, where the LR resulted in more accurate diagnoses. It should not be assumed that the use of a priori response criteria, such as SNRs of 3, 6, or 9 dB, will identify all ears with hearing loss. Test performance was excellent when multifrequency gold standards were used to define an ear as normal or impaired and when data from multiple f2 frequencies were used to make a diagnosis, especially when the LR was used. When predicting auditory status with multifrequency gold standards, the LR resulted in relative operating characteristic curve areas of 0.95 or 0.96. An output from the LR can be selected that results in a specificity of 90% or better. When the loss exceeded 40 dB HL, the same output from the LR resulted in test sensitivity of nearly 100%. These were the best test results that were achieved. (ABSTRACT TRUNCATED)
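The multivariate approach described above, combining DPOAE SNRs from several f2 frequencies into a single ear-level decision via logistic regression and summarizing performance with a relative operating characteristic (ROC) curve area, can be sketched as follows. The data are synthetic, and the number of features, class means, learning rate, and iteration count are illustrative assumptions, not the study's values or its actual fitting procedure.

```python
# Sketch: multivariate logistic regression (LR) combining DPOAE SNRs at
# several f2 frequencies into one score per ear, scored by ROC curve area.
# All data and hyperparameters here are synthetic/illustrative assumptions.
import math
import random

def sigmoid(z):
    z = max(-30.0, min(30.0, z))          # clip to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def train_lr(X, y, lr=0.1, epochs=2000):
    """Plain batch gradient descent on the logistic loss; returns weights, bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, xi):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

def roc_area(scores, labels):
    """ROC curve area via the rank-sum (Mann-Whitney) statistic."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
# Synthetic ears: features are SNRs (dB) at three f2 frequencies.
# Impaired ears (label 1) tend toward lower SNRs than normal ears (label 0).
X = [[random.gauss(10, 3) for _ in range(3)] for _ in range(40)] + \
    [[random.gauss(2, 3) for _ in range(3)] for _ in range(40)]
y = [0] * 40 + [1] * 40

w, b = train_lr(X, y)
scores = [predict(w, b, xi) for xi in X]
auc = roc_area(scores, y)
```

Because the LR output is a continuous score, a cutoff can be chosen on it after the fact, e.g. the smallest score that keeps the false-positive rate at or below 10%, which corresponds to the abstract's point that an LR output can be selected to guarantee a specificity of 90% or better.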
