Abstract

An increasing number of virtual reality applications now use full-body avatars to represent the user in virtual environments. Fully controlling these virtual avatars requires movement-tracking technology. However, most full-body tracking solutions are expensive and often cumbersome and time-consuming to set up and use. Affordable depth cameras, on the other hand, are easy to set up, but most lack the ability to fully track a user’s body and fingers and offer only limited accuracy. In this paper, we present a solution that combines multiple depth cameras to enable accurate full-body movement tracking, including accurate hand and finger tracking. This allows users to interact in the virtual environment through natural gestures. In particular, we improve on previous work in the following five aspects: we (1) extended the calibration procedure to eliminate the tracking offsets between the RGB and depth cameras, (2) optimized facing-direction detection to improve the stability of data fusion, (3) implemented two new weighting methods for the depth-data fusion of multiple cameras, (4) added the ability to also fuse joint-rotation data, and (5) integrated a short-range depth camera for finger tracking. We evaluated the system empirically and show that our new methods improve on previous work in tracking accuracy and, in particular, reduce the coupled hand-lifting phenomenon.
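The abstract refers to weighting methods for fusing joint data from multiple depth cameras. The paper's actual algorithms are not reproduced here, but as a rough illustration, the following minimal sketch shows what a confidence-weighted fusion of a single joint's position could look like, assuming each camera has already been extrinsically calibrated into a shared world frame. The function name `fuse_joint_position`, the matrix and weight variables, and the example values are hypothetical, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: fuse one joint's position as reported by several
# calibrated depth cameras. Each camera i supplies a 4x4 extrinsic matrix
# T_i (camera -> world, obtained during calibration), a joint position p_i
# in camera coordinates, and a scalar weight w_i (e.g. derived from tracking
# confidence or the user's facing direction relative to that camera).

def fuse_joint_position(extrinsics, positions, weights):
    """Return the weighted mean of one joint's world-space position.

    extrinsics : list of (4, 4) arrays, camera-to-world transforms
    positions  : list of (3,) arrays, joint position per camera
    weights    : list of floats, per-camera fusion weights
    """
    total = np.zeros(3)
    weight_sum = 0.0
    for T, p, w in zip(extrinsics, positions, weights):
        # Transform the camera-space position into the shared world frame
        # using homogeneous coordinates, then accumulate the weighted sum.
        p_world = (T @ np.append(p, 1.0))[:3]
        total += w * p_world
        weight_sum += w
    return total / weight_sum

# Example: two cameras observing the same joint from different positions.
T0 = np.eye(4)                   # reference camera defines the world frame
T1 = np.eye(4); T1[0, 3] = 2.0   # second camera offset 2 m along the x-axis
p0 = np.array([0.5, 1.2, 2.0])
p1 = np.array([-1.5, 1.2, 2.0])
print(fuse_joint_position([T0, T1], [p0, p1], [0.7, 0.3]))
```

In a system like the one described, the per-camera weights would plausibly encode tracking confidence or the detected facing direction, so that cameras viewing the user frontally dominate the fused estimate; fusing joint rotations would additionally require averaging orientations (e.g. weighted quaternion blending) rather than positions.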
