Abstract

Sound-field rendering systems based on multiple loudspeakers provide significant benefits as auditory displays for many applications in virtual audio, multimedia, teleconferencing, and auralization. In contrast to headphone-based binaural reproduction, the performance of loudspeaker rendering systems is independent of the head-related transfer functions (HRTFs) of the listener, and the listener is physically decoupled from the rendering system. In addition, many rendering techniques such as wave-field synthesis (WFS) are capable of simultaneously presenting an accurate perceptual experience to multiple listeners. However, the sound-field rendering technique, loudspeaker configuration, and number of sources play an important role in both the physical behavior of the rendering system and the associated localization performance of listeners. In this study, we examine the in situ localization performance of listeners exposed to various loudspeaker rendering systems using virtual sources. Specific reproduction methods investigated are WFS, first- and second-order Ambisonics (including a periphonic implementation), and stereophonic rendering techniques. In addition to virtual sources in the simulated free field, the effect of adding spatialized reverberation is also examined. The results are compared with a study of baseline auditory-system localization performance with real sources. [Work supported by the National Science Foundation and Rensselaer Polytechnic Institute.]
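
As illustration of one of the techniques named above, a standard formulation of first-order (B-format) Ambisonic encoding of a monophonic source signal $S$ arriving from azimuth $\theta$ and elevation $\phi$ is sketched below; this is the textbook encoding convention, not necessarily the specific implementation used in the study:

$$
\begin{aligned}
W &= \tfrac{1}{\sqrt{2}}\, S,\\
X &= S \cos\theta \cos\phi,\\
Y &= S \sin\theta \cos\phi,\\
Z &= S \sin\phi .
\end{aligned}
$$

A periphonic decoder reproduces the full three-dimensional field, including the height component $Z$, whereas a horizontal-only arrangement discards it.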
