Abstract

We introduce the annotated dynamic texture graph (ADTG) for nonlinear motion synthesis, with applications to learning models of human pose and motion from capture data. Our method clusters the motion data into motion primitives that capture local dynamical characteristics (dynamic textures), models the dynamics within each cluster using a linear dynamical system (LDS), annotates those LDSs that have a clear semantic meaning, and computes the cross-entropy between frames of the LDSs to construct a directed graph with a two-level structure. The lower level retains the detail and nuance of live motion, while the higher level generalizes motion and encapsulates the connections among LDSs. Our results show that this framework generates lifelike, controllable motion in interactive environments.
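Since the abstract only outlines the pipeline, the following is a minimal illustrative sketch rather than the paper's implementation: it fits an LDS to each pre-clustered motion segment with the standard SVD-based dynamic-texture estimator, and links two primitives with a directed edge when the Gaussian over the end frame of one primitive is close, in cross-entropy, to the Gaussian over the start frame of the other. The function names (`fit_lds`, `build_graph`), the thresholded edge rule, and the regularization constants are all assumptions; the annotation step and the two-level graph structure are omitted here.

```python
import numpy as np
from itertools import permutations

def fit_lds(Y, n_states=8):
    """Fit an LDS  x_{t+1} = A x_t + v_t,  y_t = C x_t  to one motion
    segment Y (frames x dofs) via the standard SVD-based estimator."""
    mean = Y.mean(axis=0)
    U, S, Vt = np.linalg.svd((Y - mean).T, full_matrices=False)
    C = U[:, :n_states]                                  # observation matrix
    X = (np.diag(S[:n_states]) @ Vt[:n_states]).T        # latent state per frame
    A = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T  # transition matrix
    resid = X[1:] - X[:-1] @ A.T                         # process-noise residuals
    Q = np.cov(resid.T) + 1e-8 * np.eye(n_states)        # process-noise covariance
    return dict(A=A, C=C, Q=Q, X=X, mean=mean)

def frame_gaussian(lds, t, eps=1e-6):
    """Gaussian over pose space implied by latent frame t of an LDS;
    eps regularizes the rank-deficient projected covariance (assumed)."""
    mu = lds["mean"] + lds["C"] @ lds["X"][t]
    cov = lds["C"] @ lds["Q"] @ lds["C"].T + eps * np.eye(len(mu))
    return mu, cov

def cross_entropy(mu_p, S_p, mu_q, S_q):
    """Cross-entropy H(p, q) between multivariate Gaussians p and q."""
    k = len(mu_p)
    Sq_inv = np.linalg.inv(S_q)
    d = mu_q - mu_p
    _, logdet = np.linalg.slogdet(S_q)
    return 0.5 * (k * np.log(2 * np.pi) + logdet
                  + np.trace(Sq_inv @ S_p) + d @ Sq_inv @ d)

def build_graph(segments, threshold):
    """Add a directed edge i -> j when the end of primitive i matches
    the start of primitive j under the cross-entropy cost (assumed rule)."""
    models = [fit_lds(Y) for Y in segments]
    edges = {}
    for i, j in permutations(range(len(models)), 2):
        h = cross_entropy(*frame_gaussian(models[i], -1),
                          *frame_gaussian(models[j], 0))
        if h < threshold:
            edges[(i, j)] = h
    return models, edges
```

At synthesis time, a controller would walk this directed graph from primitive to primitive, playing back (or resampling from) each LDS so that the low-level dynamics preserve the nuance of the captured motion.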
