Abstract

This paper proposes a new approach to unsupervised human motion style transfer, which transfers the style of an unlabeled motion sequence onto an unlabeled input motion while preserving the behavior of the original motion. For supervised stylistic motion transfer, constructing a database with a large number of paired training samples is expensive and tedious for artists, so we focus on unsupervised methods. However, recent unsupervised methods must retrain a network for each new character or style, which is time-consuming and resource-intensive. To address this problem, we use a meta network that maps a style motion sequence directly to a transformation network; that is, a single feed-forward pass through the meta network produces the corresponding motion transformation network. The model transfers multiple styles to different motions and generates natural stylized motions in real time without retraining for every style. In addition, our model can transfer styles between two motions performed on different skeletons. We also explore the manifold of transfer networks by interpolating between hidden states of the meta network. Experiments validate the flexibility and effectiveness of our data-driven model.
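
The core idea, a meta network whose output is the parameters of a transfer network, can be illustrated with a minimal sketch. The sketch below is not the authors' implementation; all layer sizes, names (MOTION_DIM, HIDDEN_DIM, MetaNetwork, transfer), and the choice of a GRU style encoder are assumptions made for illustration only.

```python
# Illustrative sketch of a meta network for motion style transfer (assumed
# architecture, not the paper's): a style clip is encoded to a latent code,
# which is decoded into the weights of a small per-frame transfer network.

import torch
import torch.nn as nn

MOTION_DIM = 63      # per-frame pose dimension, e.g. joint rotations (assumed)
HIDDEN_DIM = 128     # hidden width of the generated transfer network (assumed)

class MetaNetwork(nn.Module):
    """Maps a style motion sequence to the parameters of a transfer network."""
    def __init__(self, motion_dim=MOTION_DIM, hidden_dim=HIDDEN_DIM, latent_dim=256):
        super().__init__()
        # Encode the style clip into a fixed-length latent code.
        self.style_encoder = nn.GRU(motion_dim, latent_dim, batch_first=True)
        # Heads that emit the transfer network's weights and biases.
        self.w1 = nn.Linear(latent_dim, hidden_dim * motion_dim)
        self.b1 = nn.Linear(latent_dim, hidden_dim)
        self.w2 = nn.Linear(latent_dim, motion_dim * hidden_dim)
        self.b2 = nn.Linear(latent_dim, motion_dim)

    def forward(self, style_clip):
        # style_clip: (batch, frames, motion_dim)
        _, h = self.style_encoder(style_clip)
        z = h[-1]                                   # (batch, latent_dim)
        return {
            "w1": self.w1(z).view(-1, HIDDEN_DIM, MOTION_DIM),
            "b1": self.b1(z),
            "w2": self.w2(z).view(-1, MOTION_DIM, HIDDEN_DIM),
            "b2": self.b2(z),
        }

def transfer(content_clip, params):
    """Apply the generated transfer network frame-wise to a content motion."""
    # content_clip: (batch, frames, motion_dim)
    h = torch.relu(torch.einsum("bfm,bhm->bfh", content_clip, params["w1"])
                   + params["b1"].unsqueeze(1))
    return torch.einsum("bfh,bmh->bfm", h, params["w2"]) + params["b2"].unsqueeze(1)

if __name__ == "__main__":
    meta = MetaNetwork()
    style = torch.randn(2, 120, MOTION_DIM)    # unlabeled style clips
    content = torch.randn(2, 240, MOTION_DIM)  # unlabeled content clips
    stylized = transfer(content, meta(style))  # one feed-forward pass, no retraining
    print(stylized.shape)                      # torch.Size([2, 240, 63])
```

Under this reading, transferring a new style only requires one forward pass through the meta network to obtain a fresh transfer network, which is what removes the per-style retraining step; interpolating between two latent codes would correspondingly blend the generated transfer networks.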
