Abstract
This paper presents a new approach to modal synthesis for rendering the sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object, at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization uses a sparse regular grid embedding of the object, which permits the construction of plausible lower-resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle non-manifold geometries that include both volumetric and surface parts. We present a system that allows us to manipulate and tune sounding objects in a way appropriate for games, training simulations, and other interactive virtual environments.
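To make the sparse-grid embedding concrete, the following is a minimal sketch of a surface voxelization of the kind described above, assuming a triangle mesh given as NumPy arrays. The function name voxelize_surface, the point-sampling occupancy estimate, and all parameters are illustrative assumptions, not the paper's implementation; the per-cell weights stand in for the material distribution that would drive the tuning of each hexahedral finite element.

import numpy as np

def voxelize_surface(vertices, triangles, cell_size, samples_per_tri=64):
    """Embed a triangle mesh in a sparse regular grid.

    Returns a dict mapping integer cell coordinates (i, j, k) to an
    approximate material-occupancy weight, obtained here by point-sampling
    each triangle (a crude stand-in for a proper fill-fraction estimate).
    """
    rng = np.random.default_rng(0)
    origin = vertices.min(axis=0)
    cells = {}
    for tri in triangles:
        a, b, c = vertices[tri]
        # Uniform barycentric samples on the triangle.
        u = rng.random((samples_per_tri, 2))
        flip = u.sum(axis=1) > 1.0
        u[flip] = 1.0 - u[flip]
        pts = a + u[:, :1] * (b - a) + u[:, 1:] * (c - a)
        # Map each sample point to its grid cell and accumulate a count.
        ids = np.floor((pts - origin) / cell_size).astype(int)
        for i, j, k in map(tuple, ids):
            cells[(i, j, k)] = cells.get((i, j, k), 0) + 1
    # Normalize weights to [0, 1]; only occupied cells are stored (sparse grid).
    max_w = max(cells.values())
    return {key: w / max_w for key, w in cells.items()}

Coarsening the grid (a larger cell_size) would yield the lower-resolution approximations mentioned above, at the cost of a less detailed sound map.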
Highlights
Our goal is to realistically model sounding objects for animated, real-time virtual environments
This paper presents a new approach to modal synthesis for rendering sounds of virtual objects
Given the quality of the resulting sounds and the fact that increased finite-element resolution implies higher memory and computational requirements for the modal data, the Finite Element Method (FEM) resolution can be adapted to the number of sounding objects in the virtual scene
Summary
Our goal is to realistically model sounding objects for animated, real-time virtual environments. Modal synthesis models the sound of an object as a combination of damped sinusoids, each of which oscillates independently of the others. This approach is only accurate for sounds produced by linear phenomena, but it can compute these sounds in real time. Complex sounding objects, that is, objects with detailed geometries, require a large set of eigenvalues in order to preserve the sound map, that is, the changes in sound across the surface of the sounding object; this processing step can be subject to robustness problems. By contrast, matching sampled sounds to interactive animation is difficult and often leads to discrepancies between the simulated visuals and their accompanying soundtrack. The sample-based approach requires each specific contact interaction to be associated with a corresponding pre-recorded sound, resulting in a time-consuming authoring process.
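As a concrete illustration of the modal model described above, the impulse response is a sum of exponentially damped sinusoids, one per mode. The sketch below is a minimal NumPy rendering of that idea; the function name modal_impulse_response and the example mode parameters are assumptions for illustration, not values from the paper.

import numpy as np

def modal_impulse_response(freqs_hz, dampings, gains, sr=44100, duration=1.0):
    """Sum of independently oscillating damped sinusoids (modal synthesis).

    freqs_hz : modal frequencies in Hz
    dampings : exponential decay rates in 1/s
    gains    : per-mode amplitudes (these depend on where the object is struck)
    """
    t = np.arange(int(sr * duration)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, dampings, gains):
        out += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return out

# Example: three hypothetical modes of a small struck object.
ir = modal_impulse_response(
    freqs_hz=[440.0, 1210.0, 2330.0],
    dampings=[6.0, 12.0, 25.0],
    gains=[1.0, 0.5, 0.25],
)

Because each mode is independent, the per-mode gains can be varied with the contact location, which is what preserves the sound map across the object's surface.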