Abstract

We present a practical and effective method for human action transfer. Given a source action sequence and limited target information, we aim to transfer the motion from the source to the target. Although recent GAN- and VAE-based works have achieved impressive results for 2D action transfer, several problems remain unavoidable in 2D, such as distorted and discontinuous human body shapes and blurry clothing textures. In this paper, we address these problems from a novel 3D viewpoint. On the one hand, we design a skeleton-to-3D-mesh generator to produce the 3D model, which substantially improves appearance reconstruction; furthermore, we add a temporal connection to improve the smoothness of the model. On the other hand, instead of directly using the image in RGB space, we transform the target appearance information into UV space for the subsequent pose transformation. In particular, unlike conventional graphics rendering, which directly projects visible pixels into UV space, our transformation is guided by each pixel's semantic information. We evaluate the pose generator on Human3.6M and HumanEva-I; both qualitative and quantitative results show that our method outperforms 2D generation-based methods. Additionally, we compare our rendering method with graphics-based methods on Human3.6M and People-Snapshot, and the comparison shows that our rendering method is more robust and effective.
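To make the semantic-guided UV transformation concrete, the sketch below illustrates the general idea: rather than keeping only the pixels visible under a renderer's projection, every foreground pixel is routed into a UV texture atlas according to per-pixel semantic UV coordinates (in the style of DensePose IUV predictions). This is a minimal illustration under stated assumptions, not the authors' implementation; the part-label and UV-coordinate inputs, the atlas layout, and all names are hypothetical.

```python
import numpy as np

def image_to_uv_texture(image, part_ids, uv_coords, num_parts=24, res=64):
    """Scatter image pixels into a per-part UV texture atlas.

    image:     (H, W, 3) float array, the target appearance frame.
    part_ids:  (H, W) int array, 0 = background, 1..num_parts = body part.
    uv_coords: (H, W, 2) float array in [0, 1], semantic UV per pixel.
    Returns a (num_parts, res, res, 3) texture atlas and a validity mask
    marking which texels received at least one pixel.
    """
    atlas = np.zeros((num_parts, res, res, 3), dtype=np.float32)
    mask = np.zeros((num_parts, res, res), dtype=bool)

    ys, xs = np.nonzero(part_ids > 0)        # foreground pixels only
    parts = part_ids[ys, xs] - 1             # 0-based part index

    # Quantize continuous semantic UV coordinates to texel indices.
    us = np.clip((uv_coords[ys, xs, 0] * (res - 1)).round().astype(int), 0, res - 1)
    vs = np.clip((uv_coords[ys, xs, 1] * (res - 1)).round().astype(int), 0, res - 1)

    # Every labeled pixel contributes, regardless of renderer visibility;
    # pixels mapping to the same texel simply overwrite earlier ones here.
    atlas[parts, vs, us] = image[ys, xs]
    mask[parts, vs, us] = True
    return atlas, mask
```

The resulting partial texture atlas can then be re-rendered under a new pose, with the validity mask indicating which texels still need to be inpainted; how the paper fills those gaps is beyond what this sketch shows.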
