Abstract

Generating immersive virtual reality avatars is a challenging task in VR/AR applications, which maps physical human body poses to avatars in virtual scenes for an immersive user experience. However, most existing methods are computationally expensive and constrained by their datasets, and therefore do not satisfy the immersive and real‐time requirements of VR systems. In this paper, we aim to generate 3D real‐time virtual reality avatars based on a monocular camera to solve these problems. Specifically, we first design a self‐attention distillation network (SADNet) for effective human pose estimation, which is guided by a pre‐trained teacher. Secondly, we propose a lightweight pose mapping method for human avatars that utilizes the camera model to map 2D poses to 3D avatar keypoints, generating real‐time human avatars with pose consistency. Finally, we integrate our framework into a VR system, displaying the generated 3D pose‐driven avatars on Helmet‐Mounted Display devices for an immersive user experience. We evaluate SADNet on two publicly available datasets. Experimental results show that SADNet achieves a state‐of‐the‐art trade‐off between speed and accuracy. In addition, we conduct a user experience study on the performance and immersion of the virtual reality avatars. Results show that the pose‐driven 3D human avatars generated by our method are smooth and attractive.
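To make the camera-model-based pose mapping concrete, the sketch below back-projects estimated 2D joint locations to 3D camera-space keypoints with a standard pinhole model. This is a minimal illustration only, not the paper's implementation: the intrinsics (fx, fy, cx, cy), the per-joint depths, and the function name backproject_keypoints are all assumed placeholders.

```python
import numpy as np

def backproject_keypoints(kp_2d, depths, fx, fy, cx, cy):
    """Back-project 2D pixel keypoints to 3D camera-space points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Hypothetical helper for illustration; not the paper's actual method."""
    kp_2d = np.asarray(kp_2d, dtype=np.float64)    # (N, 2) pixel coordinates
    depths = np.asarray(depths, dtype=np.float64)  # (N,) assumed per-joint depth Z
    X = (kp_2d[:, 0] - cx) * depths / fx
    Y = (kp_2d[:, 1] - cy) * depths / fy
    return np.stack([X, Y, depths], axis=1)        # (N, 3) 3D avatar keypoints

if __name__ == "__main__":
    # Example: two estimated 2D joints with placeholder depths and intrinsics.
    joints_2d = [[320.0, 240.0], [300.0, 400.0]]
    joints_3d = backproject_keypoints(joints_2d, depths=[2.0, 2.0],
                                      fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    print(joints_3d)
```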
