Abstract

Human pose imitation, which aims to generate an image combining a source character’s appearance and shape with a target character’s posture, has many potential applications in virtual reality, augmented reality, games, movies, etc. It is highly challenging due to non-rigid human body motions, significant variations in clothing textures, and self-occlusion of human bodies in 2D images. In this paper, we propose Poxture, a novel human posture imitation method with neural texture, to address the challenges mentioned above. Concretely, we first build a dense mapping between a source SMPL human body model (shape and posture) and its corresponding texture (appearance). Then, we apply a neural texture generator to recover the complete texture of the source character. Finally, we warp the source neural texture onto the source SMPL model posed in the target posture and generate the desired image with a GAN model. Poxture does not require any annotations, and our framework fully disentangles the source character’s appearance, shape, and pose, which brings several advantages: 1) it can synthesize high-resolution images with detailed textures, thanks to the learned neural textures containing both visible and invisible parts as well as high-frequency information; 2) it can imitate complex actions across various appearances and body figures, since the complete texture of the source character is acquired. We compare our method with previous methods, showing state-of-the-art results on two challenging benchmarks. Extensive experiments demonstrate that, given any character, our method can animate this avatar to imitate arbitrary postures.
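The warping step described above — sampling the recovered source texture at the UV coordinates of the posed body model — can be sketched as follows. This is a minimal illustration, not the paper’s implementation: the function and variable names (`warp_texture`, `uv_map`) are hypothetical, it uses nearest-neighbor lookup on a plain NumPy array, and it assumes a per-pixel UV map has already been rasterized from the posed SMPL mesh; the actual pipeline uses learned neural textures and a GAN-based renderer.

```python
import numpy as np

def warp_texture(texture: np.ndarray, uv_map: np.ndarray) -> np.ndarray:
    """Sample a source texture at per-pixel UV coordinates (illustrative sketch).

    texture: (Ht, Wt, C) source texture, e.g. a recovered (neural) texture.
    uv_map:  (Ho, Wo, 2) UV coordinates in [0, 1] for each output pixel,
             assumed to be rasterized from the SMPL model in the target pose.
    Returns: (Ho, Wo, C) image of the texture warped to the target pose.
    """
    ht, wt = texture.shape[:2]
    # Convert normalized UVs to integer texel indices (nearest neighbor).
    u = np.clip((uv_map[..., 0] * (wt - 1)).round().astype(int), 0, wt - 1)
    v = np.clip((uv_map[..., 1] * (ht - 1)).round().astype(int), 0, ht - 1)
    return texture[v, u]

# Toy usage: warp a 2x2 RGB texture onto a 1x2 output image.
tex = np.arange(12, dtype=float).reshape(2, 2, 3)
uv = np.array([[[0.0, 0.0], [1.0, 1.0]]])  # top-left and bottom-right texels
out = warp_texture(tex, uv)
```

A real implementation would replace the nearest-neighbor lookup with differentiable bilinear sampling so gradients can flow back into the neural texture during training.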
