Abstract

The rapid creation of 3D character animation with commodity devices plays an important role in enriching visual content in virtual reality. This paper addresses the challenges of current motion imitation for the human body. We develop an interactive framework for stable motion capture and animation generation based on a single Kinect device. In particular, we focus our research efforts on two cases: (1) the participant faces the camera, or (2) the participant turns around or faces the camera sideways. With existing methods, the camera obtains only a profile view of the body in the second case, which frequently leads to unsatisfactory results or even failure due to occlusion. To reduce the artifacts that appear in the side view, we design a mechanism that refines the movement of the human body by integrating an adaptive filter. After specifying the corresponding joints between the participant and the virtual character, the captured motion is retargeted in a quaternion-based manner. To further improve animation quality, inverse kinematics is incorporated into our framework to constrain the targets' positions. A large variety of motions and characters have been tested to validate the performance of our framework. Experiments show that our method can be applied to real-time applications such as physical therapy and fitness training.
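
The abstract does not give implementation details, so the following is only a minimal sketch of the two ideas it names: an adaptive filter that smooths noisy joint data more aggressively when tracking is unreliable (e.g., in side view), and quaternion-based retargeting of a captured bone onto a character's own skeleton. All function names, parameter values, and the confidence-weighted smoothing rule are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch (not the paper's code): adaptive joint smoothing and
# quaternion-based retargeting of a single bone, using NumPy only.
import numpy as np

def adaptive_smooth(prev, curr, confidence, alpha_min=0.2, alpha_max=0.9):
    """Exponential smoothing whose weight grows with tracking confidence:
    well-tracked (front-facing) joints follow the sensor closely, while
    poorly tracked (occluded, side-facing) joints are filtered more heavily."""
    alpha = alpha_min + (alpha_max - alpha_min) * np.clip(confidence, 0.0, 1.0)
    return alpha * curr + (1.0 - alpha) * prev

def quat_between(v_from, v_to):
    """Shortest-arc quaternion (w, x, y, z) rotating v_from onto v_to.
    (Degenerate for exactly opposite vectors; ignored in this sketch.)"""
    v_from = v_from / np.linalg.norm(v_from)
    v_to = v_to / np.linalg.norm(v_to)
    q = np.array([1.0 + np.dot(v_from, v_to), *np.cross(v_from, v_to)])
    return q / np.linalg.norm(q)

def rotate(q, v):
    """Rotate vector v by unit quaternion q."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Example: retarget one captured bone (shoulder -> elbow) onto a character
# whose rest-pose bone direction and bone length differ from the performer's.
captured_shoulder = np.array([0.0, 1.4, 0.0])
captured_elbow    = np.array([0.3, 1.1, 0.1])
bind_dir = np.array([0.0, -1.0, 0.0])   # character's rest-pose bone direction
bone_len = 0.28                          # character's own bone length

bone_dir = captured_elbow - captured_shoulder
q = quat_between(bind_dir, bone_dir)     # rotation to apply to the character joint
char_elbow = captured_shoulder + rotate(q, bind_dir) * bone_len
print(q, char_elbow)
```

In such a pipeline, the smoothed joint positions would feed the retargeting step each frame, and an inverse-kinematics pass would then adjust the chain so that end effectors reach the constrained target positions, as the abstract describes.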

