Abstract

By convolving an audio stream with a pair of impulse responses measured between a source position and the two ears, virtual sound scenes can be created over headphones. Typically, the set of these filters for an ensemble of spatial positions, termed the Head-Related Impulse Response (HRIR), is used to render the position of a sound object to a listener. However, HRIRs are measured in free-field conditions, ignoring room reflections. In the real world, multiple reflections and reverberation exist, producing complex, rich sound spaces. Including room reflections and reverberation with the HRIR results in a binaural room impulse response (BRIR). The length of a given BRIR depends on the shape and volume of the room, with BRIRs typically lasting several seconds, resulting in computationally expensive processing. When the virtual environment is updated in response to head/body movement, BRIRs must be updated according to the relative direction of each sound object within the perceptual detection threshold of system latency. This poses complications for mobile devices where processing power is limited, as in the case of augmented reality. In this paper, a new signal processing architecture using distributed computers is proposed for convolution of BRIRs under such conditions.
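The binaural rendering step described in the opening sentence can be sketched with synthetic data. The impulse responses below are random placeholders standing in for a measured HRIR/BRIR pair, not the method proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000                                   # sample rate in Hz
x = rng.standard_normal(fs // 10)            # 100 ms mono source signal
ir_len = 256                                 # placeholder filter length
ir_left = rng.standard_normal(ir_len)        # stand-in for the left-ear IR
ir_right = rng.standard_normal(ir_len)       # stand-in for the right-ear IR

# Convolve the mono source with each ear's impulse response
left = np.convolve(x, ir_left)
right = np.convolve(x, ir_right)

# Stack into a two-channel signal for headphone playback
binaural = np.stack([left, right], axis=1)   # shape: (len(x) + ir_len - 1, 2)
```

A measured BRIR would replace the random filters and be several seconds long (hundreds of thousands of samples at 48 kHz), which is what motivates the distributed-convolution architecture the abstract proposes.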
