Abstract

Person image generation is a challenging problem due to the complexity of human body structure and the richness of clothing texture. Recent works have made great progress on pose transfer using keypoints, but keypoints cannot characterize personalized shape attributes, so these methods offer limited person image editing ability, especially for shape editing. In this paper, we propose to use sketches as the representation of the target image; a sketch not only captures pose and shape simultaneously but is also easy to manipulate at the semantic level. We propose DesignerGAN, a novel two-stage model for pose transfer and shape-related attribute editing. The first stage predicts the target semantic parsing from the target sketch and produces parsing feature maps. In the second stage, given the parsing feature maps and the scaled target sketch, we devise a domain-matching spatially-adaptive normalization method that guides target image generation at multiple levels. Qualitative and quantitative comparisons demonstrate our method’s superiority over state-of-the-art approaches on pose transfer. Moreover, we achieve flexible person image editing through simple hand-drawn modifications to sketches.
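
To make the second stage more concrete, the "domain-matching spatially-adaptive normalization" described above builds on the general idea of spatially-adaptive normalization: instead of learning a single scale and shift per channel, the modulation parameters are predicted per pixel from a spatial condition map (here, the parsing feature maps and the scaled target sketch). The following is a minimal PyTorch sketch of that underlying mechanism only; the module name, hidden width, and any "domain-matching" specifics are assumptions, since the abstract does not detail the paper's exact variant.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatiallyAdaptiveNorm(nn.Module):
    """Spatially-adaptive normalization sketch (hypothetical module name).

    Normalizes generator activations with a parameter-free norm, then
    modulates them with per-pixel scale (gamma) and shift (beta) maps
    predicted from a spatial condition map, e.g. parsing features or a
    scaled sketch. The paper's domain-matching variant is not specified
    in the abstract, so this shows only the generic mechanism.
    """

    def __init__(self, num_features: int, cond_channels: int, hidden: int = 128):
        super().__init__()
        # Parameter-free normalization of the incoming activations.
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        # Shared convolutional trunk over the condition map.
        self.shared = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Per-pixel modulation parameters predicted from the condition.
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Resize the condition map to the activation's spatial size, so the
        # same conditioning can guide generation at multiple levels.
        cond = F.interpolate(cond, size=x.shape[2:], mode="nearest")
        h = self.shared(cond)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)


# Usage: modulate a 64-channel activation with a 20-channel parsing map.
if __name__ == "__main__":
    layer = SpatiallyAdaptiveNorm(num_features=64, cond_channels=20)
    x = torch.randn(2, 64, 32, 32)       # generator activations
    parsing = torch.randn(2, 20, 128, 128)  # condition map at full resolution
    print(layer(x, parsing).shape)        # torch.Size([2, 64, 32, 32])
```

Because the scale and shift vary per spatial location, the condition map can steer shape and layout locally, which is what makes sketch-guided editing at multiple generator levels possible.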
