Abstract

We present a motion-sensing technology that constructs a motion graph and performs motion transitions to concatenate a collection of short motion clips. The goal is to animate a continuous, meaningful motion sequence when a user interacts with 3D virtual characters through a remote sensing device. Specific user manipulation behaviors are analyzed and automatically associated with the locomotion of a virtual character. Once the proposed approach recognizes a changing environmental stimulus, such as a turning, rolling, or clapping operation, it searches for a corresponding reference motion and traverses the structured motion graph to transition from the current motion to the target motion. In addition, both static and dynamic manipulation phases are taken into account so that the character responds autonomously to user interactions. Dynamic time warping and blending-based motion synthesis are applied to smooth the motion transitions. We demonstrate the adaptability of the proposed technology through a game-based learning application with pre-scripted scenarios, in the form of interacting with a virtual baby seal in an ice field.
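The abstract names dynamic time warping (DTW) and linear blending as the building blocks of the motion transition. As a rough illustration only, and not the authors' actual implementation, the sketch below shows the classic DTW cost recurrence for aligning two pose-parameter sequences, plus a simple linear blend between two poses; the function names and the 1-D pose representation are illustrative assumptions.

```python
import numpy as np

def dtw(a, b):
    # Illustrative DTW: aligns two 1-D pose-parameter sequences and
    # returns the accumulated alignment cost (not the authors' code).
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])       # local frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def blend(p, q, t):
    # Linear blend between two pose vectors; t goes 0 -> 1 across
    # the transition window so the current motion eases into the target.
    return (1.0 - t) * np.asarray(p) + t * np.asarray(q)
```

In a blending-based transition, DTW first time-aligns the outgoing and incoming clips so that corresponding frames match, and the blend weight `t` is then ramped over the overlap region to produce the interpolated in-between frames.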
