Abstract
Current low-cost 3-D sound displays do not use individualized head-related transfer functions (HRTFs) to render acoustic space. As a result, sound source localization accuracy is often degraded compared to the accuracy achieved with real sources, or with higher-quality displays that use individualized HRTFs. Here, a way to improve accuracy was examined in which listeners were provided with paired auditory and visual feedback indicating the correct sound source location. Sound localization accuracy was assessed for six listeners, using a large number of virtual sound sources sampled from a spherical grid surrounding the listener, before, during, and after feedback training. Feedback training markedly improved localization accuracy compared to a control group of five listeners who did not receive training. The largest improvements in accuracy resulted from listeners’ enhanced ability to distinguish sources in front from sources behind. Further, these improvements were not transient short-term effects, but lasted at least 4 days between training and testing sessions. These results suggest that simple and relatively short periods of feedback training (two 45-min sessions) can effectively facilitate perceptual re-mapping to modified spatial cues, and therefore mitigate technical deficiencies in 3-D sound systems due to nonindividualized HRTFs. [Work supported by ARL-FedLab.]
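To make the abstract's method concrete, the sketch below illustrates, in Python, how one might sample virtual source directions from a spherical grid around a listener and score localization responses, including detection of front-back reversals of the kind the training reduced. This is not the authors' code: the grid spacing, the coordinate convention, and the mirroring criterion for a reversal are assumptions chosen for illustration, and the exact scoring rules vary across studies.

```python
# Illustrative sketch (assumed details, not the authors' procedure):
# build a spherical grid of virtual source directions and score a
# localization response by angular error and front-back reversal.
import numpy as np

def sph_to_cart(az_deg, el_deg):
    """Unit vector for azimuth/elevation in degrees.
    Assumed convention: az = 0 straight ahead, positive to the left;
    el = 0 on the horizontal plane, positive upward."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    x = np.cos(el) * np.cos(az)   # front-back axis (positive = front)
    y = np.cos(el) * np.sin(az)   # left-right axis
    z = np.sin(el)                # up-down axis
    return np.array([x, y, z])

def spherical_grid(az_step=15, el_step=15, el_max=60):
    """Enumerate (azimuth, elevation) pairs on a regular grid; step sizes
    and elevation range here are arbitrary illustration values."""
    return [(az, el)
            for el in range(-el_max, el_max + 1, el_step)
            for az in range(-180, 180, az_step)]

def angular_error(target, response):
    """Great-circle angle (degrees) between target and response directions."""
    t, r = sph_to_cart(*target), sph_to_cart(*response)
    return np.degrees(np.arccos(np.clip(np.dot(t, r), -1.0, 1.0)))

def is_front_back_reversal(target, response):
    """Flag a response in the wrong front/back hemisphere that gets closer
    to the target when mirrored about the frontal plane (a common scoring
    heuristic; the precise criterion is an assumption here)."""
    t, r = sph_to_cart(*target), sph_to_cart(*response)
    if t[0] == 0 or np.sign(t[0]) == np.sign(r[0]):
        return False
    mirrored = r * np.array([-1.0, 1.0, 1.0])  # reflect response front<->back
    return np.dot(t, mirrored) > np.dot(t, r)

if __name__ == "__main__":
    grid = spherical_grid()
    target = (30.0, 0.0)      # 30 deg to the left, on the horizon
    response = (150.0, 0.0)   # mirror-image response behind the listener
    print(len(grid), "grid directions")
    print("angular error:", round(angular_error(target, response), 1), "deg")
    print("front-back reversal:", is_front_back_reversal(target, response))
```

In a scoring scheme like this, a mirror-image response 120 degrees away would count as a front-back confusion rather than a generic large error, which is how an improvement specific to front-back discrimination can be separated from overall accuracy gains.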