Abstract

This paper addresses the problem of modeling head-related transfer functions (HRTFs) for 3-D audio rendering in the frontal hemisphere. Following a structural approach, we build a model for real-time HRTF synthesis that allows separate control of the evolution of different acoustic phenomena, such as head diffraction, ear resonances, and reflections, through the design of distinct filter blocks. The model parameters are derived both from mean spectral features in a collection of measured HRTFs and from anthropometric features of the specific subject, taken from a photograph of his/her outer ear, hence allowing model customization. Visual analysis of the synthesized HRTFs reveals a convincing correspondence between original and reconstructed spectral features over the chosen spatial range. Furthermore, a possible experimental setup for dynamic psychoacoustic evaluation of the model is outlined.
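To illustrate the structural decomposition the abstract describes, the following Python sketch cascades three independently parameterized filter blocks: a first-order head-shadow filter in the spirit of Brown and Duda's spherical-head model, an RBJ peaking equalizer standing in for an ear resonance, and an elevation-dependent notch standing in for a pinna reflection. This is a hedged illustration only, not the filter designs of the paper: the sample rate, head radius, center frequencies, Q factors, and the notch's frequency-vs-elevation mapping are all assumed placeholder values.

```python
# Minimal structural-HRTF sketch. All filter choices and numeric values
# are illustrative assumptions, not the filters designed in the paper.
import numpy as np
from scipy.signal import bilinear, iirnotch, lfilter

FS = 44100            # sample rate (Hz), assumed
C = 343.0             # speed of sound (m/s)
HEAD_RADIUS = 0.0875  # spherical-head radius (m), an anthropometric parameter

def head_shadow(theta_deg):
    """First-order head-diffraction (shadowing) filter, in the spirit of
    Brown & Duda's spherical-head model; theta is the incidence angle."""
    beta = 2.0 * C / HEAD_RADIUS
    # High-frequency gain varies with angle: boost ipsilateral, cut contralateral.
    alpha = 1.05 + 0.95 * np.cos(np.deg2rad(theta_deg) * 180.0 / 150.0)
    return bilinear([alpha, beta], [1.0, beta], fs=FS)

def peaking_eq(f0, gain_db, q):
    """RBJ peaking-EQ biquad, standing in for an ear-resonance block."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / FS
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def pinna_notch(elev_deg):
    """Elevation-dependent notch standing in for a pinna-reflection block;
    the frequency-vs-elevation mapping is a hypothetical placeholder."""
    f0 = 6000.0 + 40.0 * elev_deg  # assumed mapping (Hz)
    return iirnotch(f0, Q=8.0, fs=FS)

def synthesize(x, azimuth_deg, elev_deg):
    """Cascade the three structural blocks on a mono input signal x."""
    for b, a in (head_shadow(azimuth_deg),
                 peaking_eq(4000.0, 6.0, 2.0),  # ~4 kHz resonance, assumed
                 pinna_notch(elev_deg)):
        x = lfilter(b, a, x)
    return x

# Usage: one second of noise for a source at 30 deg azimuth, 15 deg elevation.
y = synthesize(np.random.randn(FS), azimuth_deg=30.0, elev_deg=15.0)
```

Because each block is a separate low-order filter, its parameters can be retuned at run time from the subject's anthropometric data without redesigning the rest of the cascade, which is the practical payoff of the structural approach.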
