Abstract

Existing auralization frameworks for interactive virtual environments have found applications in simulating acoustic conditions for binaural listening and real-time audiovisual navigation. This work, situated at the Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab), extends elements of these frameworks for human-scale interactive audiovisual display with movement of virtual sound sources and non-static virtual representation of the facility’s footprint. The work involves the development of an adaptive acoustic ray-tracing prototype capable of generating impulse responses for individual virtual loudspeaker representations based upon runtime changes in room orientation in virtual space. Through the integrated use of game engines (i.e., Unity and Unreal Engine), the prototype is presented in the context of dynamic audiovisual display and actively analyzes the virtual scene geometries using a multi-detailed rendering approach. With both reconstructed high-resolution 3D models of existing spaces and automatically generated virtual landscapes from geo-spatial data, the developed system is evaluated both in terms of computational efficiency and in terms of conventional room acoustics parameters, using model-based acoustic energy decay analysis across the listening region. [Work supported by the Cognitive Immersive Systems Laboratory (CISL) and NSF IIS-1909229.]
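The abstract mentions evaluating the system via conventional room acoustics parameters using acoustic energy decay analysis. A common way to derive such parameters from a simulated impulse response is Schroeder backward integration followed by a linear fit of the decay curve. The sketch below is illustrative only and is not taken from the paper; all function names are hypothetical, and it assumes a broadband impulse response sampled at a known rate.

```python
import numpy as np

def schroeder_decay_db(ir):
    """Schroeder backward integration of the squared IR, normalized, in dB.

    Hypothetical helper, not from the paper: this is the standard
    energy-decay-curve construction used in room acoustics analysis.
    """
    energy = np.cumsum(ir[::-1] ** 2)[::-1]  # remaining energy at each sample
    energy /= energy[0]                      # normalize to 0 dB at t = 0
    return 10.0 * np.log10(np.maximum(energy, 1e-12))

def rt60_from_ir(ir, fs, db_lo=-5.0, db_hi=-25.0):
    """Estimate RT60 by fitting the -5 dB..-25 dB decay range (a T20-style
    fit) and extrapolating the slope to a 60 dB decay."""
    decay = schroeder_decay_db(ir)
    t = np.arange(len(ir)) / fs
    mask = (decay <= db_lo) & (decay >= db_hi)
    slope, _ = np.polyfit(t[mask], decay[mask], 1)  # dB per second
    return -60.0 / slope

# Synthetic exponentially decaying noise IR with a known RT60 of 0.5 s:
# amplitude envelope 10**(-3 t / RT60) gives a -60 dB energy decay at t = RT60.
fs = 8000
t = np.arange(int(fs * 1.0)) / fs
rng = np.random.default_rng(0)
ir = rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / 0.5)
print(f"estimated RT60: {rt60_from_ir(ir, fs):.2f} s")
```

In a ray-tracing context like the one described, such analysis would be run per virtual loudspeaker position, so that decay parameters can be compared across the listening region as the virtual room orientation changes.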
