Abstract

Under ideal laboratory conditions, individualized head‐related transfer functions (HRTFs) can produce virtual sound localization performance approaching the level achieved with real sound sources in the free field. However, in real‐world applications of virtual audio, practical issues such as fit‐refit variability in the headphone response and nonindividualized HRTFs generally lead to much worse localization performance, particularly in the up‐down and front‐back dimensions. Here we present a new technique that “enhances” the localizability of a virtual sound source by increasing the spectral contrast of the acoustic features that are relevant for spatial perception within a set of locations with nearly identical binaural cues (i.e., a “cone‐of‐confusion”). Validation experiments show that this enhancement technique can improve localization accuracy across a broad range of conditions, with as much as a 33% reduction in vertical‐polar localization error for nonindividualized HRTFs measured on a KEMAR manikin; a 25% reduction in vertical‐polar error for nonindividualized HRTFs measured on other human listeners; and a 33% reduction in vertical‐polar error for individualized HRTFs presented under nonideal laboratory conditions (i.e., with headphone fit‐refit variability). These results suggest that the proposed technique could provide benefits across a wide range of real‐world virtual audio display applications. [Work sponsored by AFOSR.]
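The core idea — exaggerating the spectral contrast of location-specific features shared across a cone-of-confusion — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' published algorithm: it assumes HRTF magnitude spectra are available for a set of locations with nearly identical binaural cues, and applies a simple contrast gain about their shared mean in the log-magnitude domain.

```python
import numpy as np

def enhance_spectral_contrast(hrtf_mags, gain=2.0):
    """Exaggerate location-specific spectral features within a
    cone-of-confusion (an illustrative sketch, not the exact method).

    hrtf_mags : (n_locations, n_freqs) array of linear HRTF magnitudes
                for locations sharing nearly identical binaural cues.
    gain      : contrast factor; 1.0 leaves the spectra unchanged.
    """
    log_mags = 20.0 * np.log10(hrtf_mags)           # work in dB
    mean_db = log_mags.mean(axis=0, keepdims=True)  # shared spectral baseline
    # Push each location's spectrum away from the shared baseline,
    # widening the spectral differences that carry the up-down and
    # front-back cues within the cone-of-confusion.
    enhanced_db = mean_db + gain * (log_mags - mean_db)
    return 10.0 ** (enhanced_db / 20.0)
```

With `gain > 1`, peaks and notches that distinguish one cone-of-confusion location from another are deepened relative to the features common to all of them, which is the sense in which the abstract's "spectral contrast" is increased.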
