Abstract
This study examines the efficiency of a training protocol using a virtual reality application designed to accelerate individuals' selection of, and accommodation to, non-individualized HRTF profiles. The training introduces three elements to hasten improvement in audio localization performance: an interactive HRTF selection method, a parametric training program based on active learning, and a relatively dry room acoustic simulation designed to increase the quantity of spatial cues presented. Participants rapidly selected an HRTF (≈5 min), followed by training over three 12-min sessions distributed over 5 days. To study the impact of the room acoustic component on the evolution of localization performance, participants were divided into two groups: one acting as a control reference, training with only anechoic renderings, and the other training in reverberant conditions. The efficiency of the training program was assessed across groups, and the entire protocol was assessed through direct comparisons with results reported in previous studies. Results indicate that the proposed training program led to improved learning rates compared to those of previous studies, and that the included room response accelerated the learning process.
Highlights
Binaural synthesis is a signal processing technique used to render spatial auditory scenes over headphones
This paper presents the results of a perceptual study designed to assess a novel Head-Related Transfer Function (HRTF) selection method and training program, conceived to reduce the time required to obtain acceptable binaural localization performance
The 24 participants of the experiment started by selecting a non-individual HRTF from an existing database, with which they then trained over three 12-min sessions
Summary
Binaural synthesis is a signal processing technique used to render spatial auditory scenes over headphones. It relies on the application of direction-dependent audio cues to a monophonic signal, mimicking the time and frequency transformations that result from the propagation of an acoustic wave from a sound source to the listener's ear canals [1, 2]. Given enough time, users of non-individual HRTFs can achieve localization performance on par with participants using their own individual HRTFs [6, 19]. Studies have even shown that one can adapt to distorted HRTFs, e.g. in Majdak et al. [28], where participants suffering from hearing loss learned to use HRTFs whose spectrum had been warped to move audio cues back into frequency bands they could perceive.
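In practice, the direction-dependent cues described above are applied by convolving the monophonic signal with the pair of head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs) measured for the desired source direction. The following is a minimal sketch of that rendering step; the HRIRs here are illustrative placeholders encoding only a crude interaural time and level difference, not measured data from any database used in the study.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono signal to two-channel binaural audio by convolving
    it with the left- and right-ear HRIRs for one source direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # shape: (2, n_samples)

# Toy example: white noise through placeholder HRIRs for a source
# to the listener's left (near ear early and loud, far ear delayed
# and attenuated).
rng = np.random.default_rng(0)
mono = rng.standard_normal(4800)
hrir_l = np.zeros(64)
hrir_l[0] = 1.0    # near ear: no delay, full level
hrir_r = np.zeros(64)
hrir_r[20] = 0.5   # far ear: ~20-sample delay, lower level
stereo = binaural_render(mono, hrir_l, hrir_r)
```

A real renderer would instead select (or interpolate) HRIR pairs from a measured set, such as one of the database profiles participants chose from, and update them as the head moves.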