Abstract

Virtual reality (VR) requires rendering accurate head-related transfer functions (HRTFs) to ensure a realistic and immersive virtual auditory space. An HRTF characterizes how each ear receives sound from a given location in space, based on the shape of the head, torso, and pinnae, and provides a unique head-related impulse response (HRIR) for each source location. Since HRTFs are person-specific and difficult to measure, recent research has used pre-existing HRTF databases and anthropometric measurements to generate personalized HRTFs with machine learning algorithms. This study investigates a personalization method that estimates the shape of each ear's HRIR and the interaural time differences (ITDs) between the two ears in separate models. In the proposed method, the shape of the HRIR is estimated with an artificial neural network (ANN) trained on time-aligned HRIRs from the CIPIC database, eliminating between-subject timing differences. A regression tree is used to estimate the ITDs, which are integer sample delays between the left and right ears. A localization test with a VR headset was conducted to evaluate the perceptual accuracy of the personalized HRTFs. Subjects completed the test with both a pre-selected average HRTF and their personalized HRTF to compare localization errors between the two conditions.
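
The abstract describes a two-model pipeline: an ANN that predicts the time-aligned HRIR shape from anthropometric measurements, and a regression tree that predicts the integer-sample ITD, which is then reinserted as a delay. The sketch below illustrates how such a pipeline might be wired together in Python; the scikit-learn models, onset-detection threshold, array shapes, and hyperparameters are illustrative assumptions and do not reflect the authors' actual implementation.

```python
# Minimal sketch of the two-model personalization pipeline summarized above.
# All array shapes, thresholds, and hyperparameters are illustrative
# assumptions, not values from the paper; random data stands in for CIPIC.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor


def time_align(hrirs, threshold=0.2):
    """Shift each HRIR so its onset lands at sample 0, removing the
    between-subject timing differences before shape estimation."""
    aligned = np.zeros_like(hrirs)
    for i, h in enumerate(hrirs):
        onset = int(np.argmax(np.abs(h) > threshold * np.abs(h).max()))
        aligned[i, : len(h) - onset] = h[onset:]
    return aligned


# Hypothetical training data: anthropometric features X, measured HRIRs for
# one source direction, and integer ITDs (sample delays between the ears).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))        # e.g. head/pinna measurements per subject
hrirs = rng.normal(size=(40, 200))   # one ear's HRIRs for a single direction
itds = rng.integers(-30, 31, size=40)

# Model 1: ANN maps anthropometry to the time-aligned HRIR shape.
shape_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0)
shape_model.fit(X, time_align(hrirs))

# Model 2: regression tree maps anthropometry to the ITD.
itd_model = DecisionTreeRegressor(max_depth=5, random_state=0)
itd_model.fit(X, itds)

# Personalization: predict the HRIR shape and the delay for a new subject,
# then reinsert the delay as leading zeros (positive delay shown; sign
# handling for the opposite ear is omitted in this sketch).
x_new = rng.normal(size=(1, 10))
shape_pred = shape_model.predict(x_new)[0]
delay = int(round(itd_model.predict(x_new)[0]))
delayed_hrir = np.concatenate([np.zeros(abs(delay)), shape_pred])[:len(shape_pred)]
```

Separating the delay from the shape in this way keeps the ANN's regression target free of between-subject timing offsets, which is the reason the abstract gives for training on time-aligned HRIRs.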
