Abstract

This study aims to create a shared virtual space for remote telepresence, expanding the range of activities beyond traditional 2D screen-based communication. The challenge lies in seamlessly connecting the virtual and physical environments so that participants can move and interact freely. To address this, the proposed approach employs advanced techniques in scene understanding, spatial mapping, and virtual environment generation. It first collects data from each participant's physical space and analyzes key information such as object placement, room layout, and interactive elements. This analysis forms the basis for generating a virtual scene that aligns with these features, creating a unified environment. A shared function optimization module ensures compatibility across spaces and meets participants' needs, and deep-learning and conditional generation techniques refine the scene further. The resulting scene optimally supports shared functions such as walking, sitting, and working, each of which maps onto physical objects present in the users' actual surroundings. The approach is evaluated through experiments on the Matterport3D dataset and a comparative user study. The results demonstrate its potential to overcome the challenges of remote telepresence, enabling immersive and interactive experiences in the virtual space. In summary, this research tackles the problem of creating a shared virtual space for remote telepresence: by analyzing input data and applying these techniques, it generates a coherent virtual scene.
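The abstract does not describe the optimization in detail, so the Python sketch below is purely illustrative: it shows one plausible reading of the shared-function idea, in which a function (walk, sit, work) is kept in the shared scene only if every participant's room contains a physical object affording it, and candidate anchor points for each function are scored by their distance to those physical counterparts. All names here (SceneObject, AFFORDANCES, shared_functions, score_layout) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SceneObject:
    label: str          # e.g. "chair", "desk", "floor"
    position: tuple     # (x, y) in metres, room-local coordinates
    affords: frozenset  # shared functions this object can support

# Hypothetical affordance map (assumed, not from the paper): which shared
# functions each detected object class supports.
AFFORDANCES = {
    "floor": frozenset({"walk"}),
    "chair": frozenset({"sit"}),
    "sofa":  frozenset({"sit"}),
    "desk":  frozenset({"work"}),
}

def shared_functions(rooms):
    """Keep only the functions available in *every* participant's room."""
    per_room = [frozenset().union(*(o.affords for o in room)) for room in rooms]
    return frozenset.intersection(*per_room)

def score_layout(anchors, rooms, functions):
    """Toy objective: an anchor point for a shared function scores higher the
    closer it lies to a physical object affording that function in each room,
    so a virtual action always maps onto a real object for every participant."""
    score = 0.0
    for func, (ax, ay) in anchors.items():
        if func not in functions:
            continue
        for room in rooms:
            dists = [((o.position[0] - ax) ** 2 + (o.position[1] - ay) ** 2) ** 0.5
                     for o in room if func in o.affords]
            score -= min(dists)  # non-empty: func survived the intersection
    return score

# Minimal usage: both rooms afford walking and sitting, but only room A
# affords working, so "work" is dropped from the shared scene.
room_a = [SceneObject("floor", (0.0, 0.0), AFFORDANCES["floor"]),
          SceneObject("chair", (1.0, 2.0), AFFORDANCES["chair"]),
          SceneObject("desk",  (3.0, 1.0), AFFORDANCES["desk"])]
room_b = [SceneObject("floor", (0.0, 0.0), AFFORDANCES["floor"]),
          SceneObject("sofa",  (2.0, 2.0), AFFORDANCES["sofa"])]

funcs = shared_functions([room_a, room_b])  # -> {'walk', 'sit'}
print(score_layout({"sit": (1.5, 2.0)}, [room_a, room_b], funcs))
```

In the actual system, this discrete scoring would presumably be replaced by the paper's shared function optimization module operating on generated scene geometry, with the deep-learning and conditional generation steps refining the layout rather than a hand-written affordance table.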
