Abstract

While graphics and visual representations for Virtual Reality (VR) systems are very well developed, the manner in which audio signals and acoustic environments are recreated in a VR system is not. In the case of audio spatialization, the current approach makes use of a library of standard head-related transfer functions (HRTFs), i.e., a user selects a generic HRTF from a library, based on limited personal information. It is essentially a “best-guess” representation of that individual’s HRTF, which limits the accuracy of audio developments for virtual reality. This paper reports results from localization tests designed to determine the capabilities of a generic HRTF used in a VR environment. Volunteers entered a VR world in which an invisible sound source emitted short bursts of white noise at various positions in the room. Volunteers were asked to point to the location of the sound source, and their responses were captured to the nearest millimeter using the VR system’s motion tracking. It is proposed that future versions of this experimental methodology will enable the development of a pseudo-personalized HRTF, unique to each individual VR user.
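To make the localization measure concrete, the sketch below shows one way the pointing responses described above could be turned into error metrics. It assumes each trial yields 3D room coordinates (in meters) for the listener's head, the true source position, and the position the volunteer pointed to; the function names and the numpy-based geometry are illustrative assumptions, not the authors' actual analysis code.

```python
import numpy as np

def localization_error(head_pos, true_source_pos, pointed_pos):
    """Return (distance_error_m, angular_error_deg) for one pointing trial.

    All arguments are 3-element sequences of room coordinates in meters,
    as reported by the VR motion tracking system (hypothetical data format).
    """
    head = np.asarray(head_pos, dtype=float)
    true_pos = np.asarray(true_source_pos, dtype=float)
    pointed = np.asarray(pointed_pos, dtype=float)

    # Straight-line distance between the true and indicated source positions.
    distance_error = np.linalg.norm(true_pos - pointed)

    # Angle between the head-to-source and head-to-pointed directions.
    true_vec = true_pos - head
    pointed_vec = pointed - head
    cos_angle = np.dot(true_vec, pointed_vec) / (
        np.linalg.norm(true_vec) * np.linalg.norm(pointed_vec)
    )
    angular_error = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    return distance_error, angular_error

# Example trial: listener at the origin at ear height, source 2 m ahead,
# response slightly off to the side.
print(localization_error([0, 1.6, 0], [0, 1.6, 2.0], [0.3, 1.7, 1.9]))
```

Reporting both a distance error and an angular error in this way would separate depth misjudgments from directional (azimuth/elevation) errors, the latter being the component most directly attributable to the generic HRTF.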