Abstract

The rapid shift to remote work has permanently changed team dynamics, and distributed teams need tools that support collaboration across any distance. Rensselaer is home to two immersive virtual environments: intelligent rooms with panoramic, human-scale projection screens, spatial audio loudspeaker arrays, and networks of time-of-flight and acoustical tracking sensors. This project seeks to “colocate” teams across both sites so that the experience mimics collaborating within the same room. Conventional video conferencing software is rarely well configured for large groups and does not account for users’ spatial arrangements. Our approach captures ultra-low-latency video and audio feeds of each space for presentation at the other end, enabling the group at each location to communicate with the other at 1-to-1 scale. A spherical microphone array tracks multiple simultaneous talkers and maps their spatial positions onto a Wave Field Synthesis loudspeaker array, maintaining audio-visual congruency. Users at each site can employ the panoramic displays to present panoramic imagery or immersive data for both groups to explore simultaneously, facilitating a collaborative immersive experience, and interactions with the screen at one site may be echoed at the other to create the appearance of a single shared space. [Work supported by CISL, NSF No. 1229391, Army DURIP No. 68604-CS-RIP.]
