Abstract
An audio-visual interface was developed to allow listeners to hear a range of different conditions and convey their preferences. Feasibility for selecting optimal listening conditions was assessed for eight normal-hearing listeners. The visual display was a 4-by-4 matrix with the x and y axes representing two different manipulations of the frequency-gain characteristic. For a speech-in-noise condition, the axes represented low-frequency gain and broadband gain. For a filtered-speech condition, the axes represented high-pass cutoff frequency and bandwidth. Subjects altered the amount of processing applied to the ongoing speech (or speech in noise) by moving a screen pointer from cell to cell. Selections were compared to percent-correct scores for lists constructed from the same speech items (nonsense syllables), and to articulation indices (AIs). Selection took approximately 2 min, and selected conditions typically had AIs within 0.10 of the highest AI in the matrix. A potential application is determining settings for nonlinear hearing aids. [Work supported by NIDCD.]