Abstract
This paper deals with the issue of individualizing the head-related transfer function (HRTF) rendering process for auditory elevation perception. Is it possible to find a nonindividual, personalized HRTF set that allows a listener to localize sound as accurately as with his/her individual HRTFs? We propose a psychoacoustically motivated, anthropometry-based mismatch function between HRTF pairs that exploits the close relation between the listener's pinna geometry and localization cues. This is evaluated using an auditory model that computes a mapping between HRTF spectra and perceived spatial locations. Results on a large number of subjects in the Center for Image Processing and Integrated Computing (CIPIC) and Acoustics Research Institute (ARI) HRTF databases suggest that there exists a nonindividual HRTF set that allows a listener to achieve vertical localization as accurate as with individual HRTFs. Furthermore, we find the optimal parameterization of the proposed mismatch function, i.e., the one that best reflects the information given by the auditory model. Our findings show that the selection procedure yields statistically significant improvements with respect to dummy-head HRTFs or random HRTF selection, with potentially high impact from an application point of view.
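To make the selection idea concrete, the following is a minimal sketch (not the authors' implementation) of choosing a non-individual HRTF set by minimizing an anthropometry-based mismatch over a database of measured subjects. The feature set, the weighted absolute-difference form of the mismatch, and the uniform weights are illustrative assumptions; in the paper the mismatch is derived from pinna geometry and its parameterization is tuned against the auditory model.

```python
# Hypothetical sketch of non-individual HRTF selection via an
# anthropometry-based mismatch; all numbers and feature choices are made up.
import numpy as np

def mismatch(pinna_a, pinna_b, weights):
    """Weighted distance between two pinna feature vectors (assumed form)."""
    return np.sum(weights * np.abs(pinna_a - pinna_b))

def select_hrtf(listener_pinna, database_pinnae, weights):
    """Index of the database subject whose pinna features best match the listener."""
    scores = [mismatch(listener_pinna, p, weights) for p in database_pinnae]
    return int(np.argmin(scores))

# Toy usage: random anthropometric features (e.g., pinna height, cavum concha
# depth) for 45 database subjects and 5 features, in centimeters.
rng = np.random.default_rng(0)
database_pinnae = rng.uniform(0.5, 4.0, size=(45, 5))
listener_pinna = rng.uniform(0.5, 4.0, size=5)
weights = np.ones(5)  # uniform weighting assumed; the paper optimizes this
best = select_hrtf(listener_pinna, database_pinnae, weights)
print(f"Selected non-individual HRTF set: database subject {best}")
```

In the paper, the quality of such a selection is not judged by the mismatch itself but by the auditory model's predicted vertical localization performance, which is what drives the choice of the mismatch parameterization.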