Abstract

Human body pose transfer transforms a person image from the pose in the source image to a given target pose. In recent years, research has made great progress in transferring human body pose from a source image to a target pose, but the detailed texture of the generated images remains insufficient. To address this problem, a new two-stage TPIT network model is proposed to refine the detailed texture of the pose-generated image. The first stage is a source image self-learning module, which extracts source image features by learning from the source image itself and thereby improves the appearance details of the pose-generated image. The second stage gradually transforms the figure's pose from the source pose to the target pose. Then, by learning the feature correlation between the source and target images through cross-modal attention, texture transfer between images is promoted, producing finer-grained details in the generated image. Extensive experiments show that the model achieves superior performance on the Market-1501 and DeepFashion datasets, outperforming other state-of-the-art methods especially in the quantitative and qualitative evaluations on Market-1501.
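The abstract does not give the exact formulation of the cross-modal attention between source and target features, but the idea is commonly realized as scaled dot-product attention in which queries come from the target-pose features and keys/values come from the source-image features, so that source texture can be warped onto the target pose. The following is a minimal PyTorch sketch under that assumption; the class name `CrossModalAttention` and all tensor shapes are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Hypothetical sketch: attention where target-pose features form the
    queries and source-image features form the keys/values, so appearance
    texture from the source is transferred to the target pose."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5  # standard dot-product scaling

    def forward(self, target_feat: torch.Tensor,
                source_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = target_feat.shape
        q = self.query(target_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.key(source_feat).flatten(2)                    # (B, C, HW)
        v = self.value(source_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        # Each target location attends over all source locations.
        attn = torch.softmax(q @ k * self.scale, dim=-1)        # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out + target_feat  # residual keeps the pose structure


# Usage: fuse source texture into 16x16 target-pose feature maps.
attn = CrossModalAttention(channels=64)
src = torch.randn(1, 64, 16, 16)  # features extracted from the source image
tgt = torch.randn(1, 64, 16, 16)  # features of the intermediate target pose
fused = attn(tgt, src)            # (1, 64, 16, 16)
```

The residual connection reflects a common design choice: the target branch retains its pose structure while the attention output injects correlated source texture, which matches the abstract's goal of finer-grained texture in the generated image.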
