Abstract

Immersive virtual reality requires both visual and auditory information. This paper proposes a computational method for fast synthesis of plausible fire sound that is synchronized with physically based fire animations. We divide fire sound into a low-frequency part and a mid-to-high-frequency part, and synthesize the two parts with separate processes. By simplifying calculations with a novel combustion sound model and leveraging GPU parallel computing in a marching-cube-like manner, our method speeds up the computation of the low-frequency part by an order of magnitude. Because the time-stepping fire simulation runs at a relatively low rate rather than at the audio rate, we add synchronized mid- and high-frequency wavelet details to the low-frequency result in a post-process to produce the complete fire sound. We validate our method with various experiments, establishing a solid physically based foundation for real-time acoustic rendering in immersive virtual reality scenarios.
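As a rough illustration of the two-band idea described above (not the authors' implementation), the sketch below upsamples a low-rate, simulation-driven pressure signal to audio rate and adds band-limited noise as a stand-in for the wavelet-based mid- and high-frequency detail. All names and parameter values (SIM_RATE, SPLIT_HZ, detail_gain) are hypothetical choices for the example.

```python
import numpy as np

AUDIO_RATE = 44100   # audio sampling rate (Hz)
SIM_RATE = 60        # fire-simulation update rate (Hz); hypothetical value
SPLIT_HZ = 500       # hypothetical crossover between the two bands

def upsample_low_band(sim_samples: np.ndarray) -> np.ndarray:
    """Linearly interpolate a simulation-rate pressure signal up to audio rate."""
    n_audio = int(len(sim_samples) * AUDIO_RATE / SIM_RATE)
    t_sim = np.arange(len(sim_samples)) / SIM_RATE
    t_audio = np.arange(n_audio) / AUDIO_RATE
    return np.interp(t_audio, t_sim, sim_samples)

def band_limited_noise(n: int, lo_hz: float, hi_hz: float) -> np.ndarray:
    """FFT-based band-passed noise, a crude stand-in for wavelet-synthesized detail."""
    spectrum = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / AUDIO_RATE)
    spectrum[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0
    return np.fft.irfft(spectrum, n)

def synthesize_fire_sound(sim_samples: np.ndarray, detail_gain: float = 0.3) -> np.ndarray:
    """Combine the low-frequency band with mid/high-frequency detail in a post-process."""
    low = upsample_low_band(sim_samples)
    high = band_limited_noise(len(low), SPLIT_HZ, AUDIO_RATE / 2)
    # Modulate the detail by the low-band envelope so the two bands stay synchronized.
    envelope = np.abs(low) / (np.max(np.abs(low)) + 1e-9)
    return low + detail_gain * envelope * high

# Example: one second of a fake simulation-rate pressure signal.
sim_signal = np.random.randn(SIM_RATE)
audio = synthesize_fire_sound(sim_signal)
```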
