Abstract
A theory for capturing an audio scene and then rendering it remotely over headphones is developed. The method relies on capturing the sound field up to a certain order in terms of spherical wave functions. The resulting sound-field representation is transmitted to a remote location for immediate rendering or stored for later use. At the rendering stage, the captured sound field is convolved with the head-related transfer function and played back over headphones to provide a sense of presence in the auditory scene. A system that implements the capture using a spherical microphone array is developed and tested. Head-related transfer functions are measured using the system described in [D. N. Zotkin et al., J. Acoust. Soc. Am. (to appear)]. The sound renderer, coupled with a head tracker, reconstructs the acoustic field using individualized head-related transfer functions to preserve the perceptual spatial structure of the audio scene. [Work partially supported by VA.]
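The abstract describes the processing chain only at a high level. As an illustration (not the authors' implementation), the Python sketch below encodes signals from a spherical microphone array into spherical-harmonic coefficients by least squares and renders them binaurally by convolving a plane-wave decode with head-related impulse responses (HRIRs). The array geometry, HRIR data, truncation order, and the omission of rigid-sphere radial equalization are all simplifying assumptions.

```python
# Minimal sketch, assuming a spherical array with known capsule directions
# and HRIRs measured at the virtual-source directions. All names and data
# below are placeholders, not the authors' system.
import numpy as np
from scipy.special import sph_harm
from scipy.signal import fftconvolve

def sh_matrix(azim, colat, order):
    """Complex spherical-harmonic matrix, shape (num_dirs, (order+1)**2)."""
    cols = [sph_harm(m, n, azim, colat)
            for n in range(order + 1) for m in range(-n, n + 1)]
    return np.stack(cols, axis=1)

def encode(mic_signals, mic_azim, mic_colat, order):
    """Least-squares fit of SH coefficients to the array pressure signals.
    Rigid-sphere radial (mode-strength) equalization is omitted for brevity."""
    Y = sh_matrix(mic_azim, mic_colat, order)
    return np.linalg.pinv(Y) @ mic_signals        # ((order+1)**2, num_samples)

def render_binaural(coeffs, src_azim, src_colat, hrir_left, hrir_right, order):
    """Sampling decode to virtual plane-wave directions, then per-ear HRIR
    convolution. A head tracker would rotate `coeffs` in the SH domain first."""
    Y = sh_matrix(src_azim, src_colat, order)
    feeds = np.real(Y @ coeffs)                   # (num_dirs, num_samples)
    left = sum(fftconvolve(s, h) for s, h in zip(feeds, hrir_left))
    right = sum(fftconvolve(s, h) for s, h in zip(feeds, hrir_right))
    return left, right

# Toy end-to-end run with random placeholder data (32 mics, order 3,
# 16 virtual directions, 128-tap noise "HRIRs").
rng = np.random.default_rng(0)
order, n_mics, n_src, n_samp = 3, 32, 16, 4800
mic_az, mic_co = rng.uniform(0, 2*np.pi, n_mics), rng.uniform(0, np.pi, n_mics)
src_az, src_co = rng.uniform(0, 2*np.pi, n_src), rng.uniform(0, np.pi, n_src)
coeffs = encode(rng.standard_normal((n_mics, n_samp)), mic_az, mic_co, order)
ear_l, ear_r = render_binaural(coeffs, src_az, src_co,
                               rng.standard_normal((n_src, 128)),
                               rng.standard_normal((n_src, 128)), order)
```

In the system the abstract describes, the HRIRs would be the individualized measurements cited above, and the tracked head orientation would presumably be applied as a rotation of the coefficients in the spherical-harmonic domain before decoding.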