Abstract
Human motion prediction refers to forecasting future human motion given a past motion sequence, with significant applications in human tracking, automatic motion generation, autonomous driving, human-robot interaction, etc. Previous works typically use RNN-based methods that focus on modeling the temporal dynamics of human motion, and they have made great progress on content motion. However, their performance on stylized motion, which carries more expressive emotions and states, remains unclear. Different styles within the same motion type share similar motion patterns but also exhibit subtle variations, which makes stylized motion difficult to predict. The main idea of this paper is to learn the spatial characteristics of stylized motion and combine them with temporal dynamics to achieve accurate prediction. We adopt a transformer-based style encoder to learn the motion representation in the pose space and map it to a latent space modeled by a constant-variance Gaussian mixture model; meanwhile, we use a hierarchical multi-scale RNN as a temporal encoder to capture the temporal dynamics of human motion; finally, we feed the spatial and temporal features into a prediction decoder to predict the next frame. Our experiments on the Human3.6M and Stylized Motion datasets demonstrate that our model achieves prediction performance comparable to state-of-the-art motion prediction works on Human3.6M and outperforms previous works on stylized human motion prediction.
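To make the two-branch pipeline in the abstract concrete, below is a minimal PyTorch sketch of a transformer style encoder, a two-scale RNN temporal encoder, and a fusion decoder that predicts the next pose. All names, layer sizes, the stride of 4 for the coarse temporal scale, and the 66-dimensional pose vector are illustrative assumptions rather than the paper's actual configuration, and the constant-variance Gaussian mixture prior on the latent space is omitted for brevity.

```python
import torch
import torch.nn as nn

class StylizedMotionPredictor(nn.Module):
    """Hypothetical sketch of the architecture described in the abstract:
    a transformer style encoder for spatial structure, a hierarchical
    multi-scale RNN for temporal dynamics, and a decoder that fuses both
    feature streams to predict the next frame."""

    def __init__(self, pose_dim=66, d_model=128, latent_dim=64):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)
        # Transformer-based style encoder over the pose sequence.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.style_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Projection into the style latent space (the paper models this
        # space with a constant-variance Gaussian mixture prior; a plain
        # linear map stands in for it here).
        self.to_latent = nn.Linear(d_model, latent_dim)
        # Hierarchical multi-scale temporal encoder: a fine-scale GRU over
        # every frame and a coarse-scale GRU over a strided subsequence.
        self.fine_rnn = nn.GRU(pose_dim, d_model, batch_first=True)
        self.coarse_rnn = nn.GRU(pose_dim, d_model, batch_first=True)
        # Decoder fuses the style latent and temporal features into a pose.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 2 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, pose_dim),
        )

    def forward(self, poses):  # poses: (batch, frames, pose_dim)
        style = self.style_encoder(self.embed(poses)).mean(dim=1)
        z = self.to_latent(style)                      # style latent code
        _, h_fine = self.fine_rnn(poses)               # every frame
        _, h_coarse = self.coarse_rnn(poses[:, ::4])   # coarser time scale
        feats = torch.cat([z, h_fine[-1], h_coarse[-1]], dim=-1)
        return self.decoder(feats)                     # predicted next frame


# Usage: predict the next pose from 50 observed frames of a 66-D skeleton.
model = StylizedMotionPredictor()
history = torch.randn(8, 50, 66)   # batch of 8 past sequences
next_pose = model(history)         # (8, 66)
```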