Abstract

In both entertainment and professional applications, conventionally produced stereo or multi-channel audio content is frequently delivered over headphones or earbuds. Use cases involving object-based binaural audio rendering include recently developed immersive multi-channel audio distribution formats, along with the accelerating deployment of virtual or augmented reality applications and head-mounted displays. End users' appreciation of these listening experiences may be compromised by an unnatural perception of the localization of frontal audio objects, which are commonly heard near or inside the listener's head even when their specified position is distant. This artifact may persist despite the provision of perceptual cues known to partially mitigate it, including artificial acoustic reflections or reverberation, head-tracking, individualized HRTF processing, or reinforcing visual information. In this paper, we review previously reported methods for binaural audio externalization processing, and generalize a recently proposed approach to address object-based audio rendering.
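The object-based binaural rendering mentioned above can be illustrated, in its most basic form, as the convolution of each mono audio object with a pair of head-related impulse responses (HRIRs) for the object's position. The sketch below is illustrative only and is not the paper's proposed method; the HRIRs here are synthetic placeholders (a simple interaural time and level difference), whereas a real renderer would use measured or individualized HRTF data.

```python
import numpy as np

def render_binaural(source, hrir_left, hrir_right):
    """Render a mono audio object to two ears by HRIR convolution.

    Returns a (2, N) array: left and right ear signals.
    """
    left = np.convolve(source, hrir_left)
    right = np.convolve(source, hrir_right)
    return np.stack([left, right])

# Placeholder HRIRs (assumed, not from the paper): the far ear gets a
# ~0.6 ms delay (interaural time difference) and attenuation (level
# difference) relative to the near ear, at a 48 kHz sample rate.
fs = 48000
hrir_l = np.zeros(64)
hrir_l[0] = 1.0       # near ear: no delay, full level
hrir_r = np.zeros(64)
hrir_r[30] = 0.6      # far ear: 30-sample delay, attenuated

x = np.random.randn(1024)          # a mono audio object
y = render_binaural(x, hrir_l, hrir_r)
```

In this toy case the left channel reproduces the source directly, while the right channel is a delayed, attenuated copy; the listener perceives a lateralized image. The externalization problem the paper addresses is precisely that such convolution alone, even with measured HRTFs, often yields an image heard near or inside the head.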
