Abstract

Extending the frontier of visual computing, sound rendering uses audio to communicate information to a user and offers an alternative means of visualization. By harnessing the sense of hearing, audio rendering can further enhance a user's experience in a multimodal virtual world and is essential for immersive environments, computer games, simulation, training, and the design of next-generation human-computer interfaces. In this talk, we give an overview of our recent work on sound synthesis and sound propagation. This includes generating realistic, physically based sounds from rigid-body dynamics simulations, as well as liquid sounds based on bubble resonance and coupling with fluid simulators. We also describe new, fast algorithms for sound propagation based on improved numerical techniques and fast geometric sound propagation. Our algorithms improve on the state of the art in sound propagation by nearly one to two orders of magnitude, and we demonstrate that interactive propagation in complex, dynamic environments is possible by exploiting the computational capabilities of multi-core CPUs and many-core GPUs. We will also show preliminary results on the design of next-generation musical instruments using multi-touch interfaces. This is joint work with faculty and students of the GAMMA group at UNC Chapel Hill.
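To make the bubble-resonance idea mentioned above concrete, the following is a minimal illustrative sketch, not the authors' implementation: it computes the classical Minnaert resonance frequency of a spherical air bubble in water, the quantity that liquid-sound synthesis methods typically map to the pitch of each bubble event. The helper name minnaert_frequency and the physical constants are assumptions chosen for illustration.

import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101325.0, rho=998.0):
    """Resonance frequency (Hz) of a spherical air bubble in water.

    Minnaert's formula: f0 = (1 / (2*pi*R)) * sqrt(3*gamma*p0 / rho),
    where R is the bubble radius, gamma the heat-capacity ratio of the gas,
    p0 the ambient pressure (Pa), and rho the liquid density (kg/m^3).
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

if __name__ == "__main__":
    # A 1 mm bubble rings at roughly 3.3 kHz; smaller bubbles ring higher.
    for r_mm in (0.5, 1.0, 2.0, 5.0):
        f0 = minnaert_frequency(r_mm * 1e-3)
        print(f"radius {r_mm:4.1f} mm -> f0 ~ {f0:7.1f} Hz")

In practice, a synthesis pipeline of this kind would drive a damped sinusoid at f0 for each bubble detected in the fluid simulation; the sketch only shows the frequency mapping.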
