Abstract

In general, a virtual reality (VR) system cannot deliver the realistic sound formed by multi-channel audio signals to a user in a stereo headphone environment. In addition, a gap arises between the visual scene and the sound because the system supplies an audio signal with a fixed sound scene regardless of changes in the user's position. To solve these problems, we introduce sound scene control of binaural sound. Binaural sound is a realistic stereo sound that can be generated by convolving a 10.1-channel audio signal with head-related transfer function (HRTF) coefficients, which characterize all the paths from the multi-channel speaker layout to the ears in free space. However, because binaural sound is generated with a fixed multi-channel layout and fixed HRTF coefficients, it has a constant sound scene and cannot reflect the user's movements. We therefore apply a sound scene control scheme that modifies the binaural sound to follow the user's movement in the VR system. First, the multi-channel layout is re-created according to the user's azimuth change, and the original multi-channel signal is mapped onto the new ultra multi-channel layout using the constant-power panning law. Second, the level of the new multi-channel audio signal is adjusted according to the user's distance change, exploiting the fact that sound level is inversely proportional to distance. Finally, the resulting multi-channel audio signal is convolved with the HRTF coefficients to produce a binaural sound with the controlled sound scene. As a result, the proposed realistic sound generation method allows the current VR system to provide a true VR service without discrepancy between the visual scene and the sound.
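The three processing steps described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: `constant_power_pan` distributes a channel's signal between two adjacent virtual loudspeakers with the constant-power (sin/cos) law, `distance_gain` applies the 1/r level attenuation, and `binauralize` convolves each channel with an HRTF pair and sums to two ears. All function names, the two-speaker panning setup, and the HRTF arrays are illustrative assumptions.

```python
import numpy as np

def constant_power_pan(signal, angle, left_angle, right_angle):
    # Pan a signal between two adjacent virtual speakers using the
    # constant-power (sin/cos) law, so summed power stays constant
    # (cos^2 + sin^2 = 1) as the source angle changes.
    frac = (angle - left_angle) / (right_angle - left_angle)  # 0..1
    theta = frac * (np.pi / 2)
    return np.cos(theta) * signal, np.sin(theta) * signal

def distance_gain(signal, ref_distance, new_distance):
    # Sound level is inversely proportional to distance (1/r law):
    # doubling the distance halves the amplitude.
    return signal * (ref_distance / new_distance)

def binauralize(channels, hrtf_left, hrtf_right):
    # Convolve each channel with its left/right HRTF impulse response
    # (hypothetical arrays here) and sum into a two-ear stereo signal.
    left = sum(np.convolve(ch, h) for ch, h in zip(channels, hrtf_left))
    right = sum(np.convolve(ch, h) for ch, h in zip(channels, hrtf_right))
    return left, right
```

In a full system the panning would run over every speaker pair of the re-created layout and the HRTF filters would come from a measured set for that layout; the sketch only shows the signal path of one channel end to end.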
