Abstract

Pose Guided Person Image Generation (PGPIG) is the task of transforming a person's image from a source pose to a target pose. Existing PGPIG methods tend to learn an end-to-end transformation between the source image and the target image, but rarely address two issues: 1) PGPIG is an ill-posed problem, and 2) the texture mapping requires effective supervision. To alleviate these two challenges, we propose a novel method that incorporates a Dual-task Pose Transformer Network and a Texture Affinity learning mechanism (DPTN-TA). To assist learning of the ill-posed source-to-target task, DPTN-TA introduces an auxiliary source-to-source task via a Siamese structure and further explores the dual-task correlation. Specifically, the correlation is built by the proposed Pose Transformer Module (PTM), which adaptively captures the fine-grained mapping between source and target features and promotes the transmission of source textures to enhance the details of the generated images. Moreover, we propose a novel texture affinity loss to better supervise the learning of the texture mapping. In this way, the network learns complex spatial transformations effectively. Extensive experiments show that our DPTN-TA can produce perceptually realistic person images under significant pose changes. Furthermore, DPTN-TA is not limited to human bodies and can be flexibly extended to view synthesis of other objects, e.g., faces and chairs, outperforming state-of-the-art methods in terms of both LPIPS and FID. Our code is available at: https://github.com/PangzeCheung/Dual-task-Pose-Transformer-Network.
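For intuition, the following is a minimal, hypothetical PyTorch sketch of the dual-task idea described above: a shared (Siamese) encoder processes both the auxiliary source-to-source branch and the main source-to-target branch, and a simplified cross-attention block stands in for the PTM that correlates the two branches. All module names, the 18-channel pose representation, and the layer choices are illustrative assumptions, not the authors' implementation; the real code is in the linked repository.

```python
# Hypothetical sketch of the dual-task Siamese structure with a PTM-like block.
# Not the authors' code; shapes and layers are simplified assumptions.
import torch
import torch.nn as nn

class PoseTransformerModule(nn.Module):
    """Toy stand-in for the PTM: cross-attention from the target-branch
    features (query) to the source-branch features (key/value)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target_feat, source_feat):
        # target_feat, source_feat: (B, N, C) flattened spatial features
        out, _ = self.attn(target_feat, source_feat, source_feat)
        return out + target_feat  # residual transfer of source texture cues

class DPTNSketch(nn.Module):
    def __init__(self, dim=64, pose_channels=18):  # 18-channel keypoint map is an assumption
        super().__init__()
        # Shared (Siamese) encoder: consumes an image concatenated with a pose map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + pose_channels, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.ptm = PoseTransformerModule(dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, src_img, src_pose, tgt_pose):
        # Source-to-source (auxiliary) and source-to-target (main) branches
        # share the same encoder weights (Siamese structure).
        f_src = self.encoder(torch.cat([src_img, src_pose], dim=1))
        f_tgt = self.encoder(torch.cat([src_img, tgt_pose], dim=1))
        b, c, h, w = f_src.shape
        f_src_seq = f_src.flatten(2).transpose(1, 2)   # (B, H*W, C)
        f_tgt_seq = f_tgt.flatten(2).transpose(1, 2)
        f_tgt_seq = self.ptm(f_tgt_seq, f_src_seq)     # dual-task correlation
        f_tgt = f_tgt_seq.transpose(1, 2).reshape(b, c, h, w)
        # Return (source reconstruction, target generation); in training, both
        # would be supervised, with the texture affinity loss guiding the mapping.
        return self.decoder(f_src), self.decoder(f_tgt)
```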
