This work, situated at Rensselaer’s Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab), demonstrates a system that uses the facility’s panoramic display and multichannel wave field synthesis loudspeaker array to simulate navigable, human-scale urban environments with automatically generated virtual soundscapes. The system places the CRAIVE-Lab’s virtual footprint within the Unity game engine and allows it to move through virtual space. Given a geo-location input, the system uses ArcGIS to extract geospatial features, such as urban topologies and building extrusions. The same input is also used to retrieve real-time weather data from open-source databases (e.g., OpenWeather). Based on the extracted information, the system updates the acoustic signatures of the virtual surroundings by performing a multi-channel ray-tracing analysis at fixed time intervals. The resulting signatures are then used to generate environmental noise profiles and to process auto-generated virtual sound sources present in the environment using wave field synthesis, drawing on an extension of audio datasets typically used for model training in urban sound classification (e.g., UrbanSound8K). We present the results as part of an in situ audiovisual experience in which users can stand within the CRAIVE-Lab’s physical enclosure and walk through the virtual landscape in which they are immersed.
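As a minimal sketch of the weather-retrieval step, the following Python snippet queries OpenWeather’s current-weather endpoint for a given geo-location. The helper name `fetch_weather` and the selection of returned fields are illustrative assumptions, not the paper’s actual implementation; the comments on how these fields might feed the acoustic pipeline are likewise assumptions.

```python
import requests

OPENWEATHER_URL = "https://api.openweathermap.org/data/2.5/weather"

def fetch_weather(lat: float, lon: float, api_key: str) -> dict:
    """Retrieve current conditions for a geo-location from OpenWeather."""
    resp = requests.get(
        OPENWEATHER_URL,
        params={"lat": lat, "lon": lon, "appid": api_key, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Hypothetical selection: precipitation and wind could drive the
    # environmental noise profiles, while temperature and humidity could
    # inform air-absorption terms in the ray-tracing analysis.
    return {
        "temperature_c": data["main"]["temp"],
        "humidity_pct": data["main"]["humidity"],
        "wind_speed_ms": data["wind"]["speed"],
        "conditions": data["weather"][0]["main"],  # e.g., "Rain", "Clear"
    }
```

Polling such an endpoint at fixed intervals, matching the cadence of the ray-tracing updates described above, would keep the virtual soundscape synchronized with real-world conditions at the chosen geo-location.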