Abstract

This article studies captured human motion data for motion synthesis and style transfer, constructs a virtual motion video scene, and attempts to generate human motion style video directly by establishing a motion style transfer model that combines deep self-coding with spatiotemporal constraints. The original human motion capture data are mapped to a motion feature space for style transfer synthesis: an encoding network maps the high-dimensional motion capture data to a low-dimensional feature space, style transfer constraints are established in that feature space, and the stylized human motion is obtained by decoding. The paper further proposes a pixel-level human motion style transfer model based on conditional adversarial networks, using two convolutional branch encoding networks to extract features from the input style video and content images. A decoding network combines the two features and generates the human motion video frame by frame, while a Gram matrix imposes constraints on the encoded and decoded features to control the style of the body movement and, finally, to visualize the movement process. An incremental learning method based on a cascade network improves accuracy and achieves a posture measurement frequency of 200 Hz. The results provide a key foundation for improving the sense of immersion in visual and tactile interaction simulation for sports.
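To make the pixel-level pipeline concrete, the sketch below shows the two-branch encoding and frame-by-frame decoding idea described in the abstract. PyTorch is assumed, and all architectural details (channel sizes, the use of 3D convolutions for the video branch, the fusion by concatenation) are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch of a two-branch generator: one convolutional branch encodes the
# style video, another encodes the content image, and a decoder generates the
# output video frame by frame from the fused features. Details are assumed.
import torch
import torch.nn as nn

class TwoBranchGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions over the style video (B, C, T, H, W) -- assumed design
        self.style_enc = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(),
        )
        # 2D convolutions over the content image (B, C, H, W)
        self.content_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder upsamples the fused features back to image resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, style_video, content_image):
        s = self.style_enc(style_video)       # (B, 64, T, H/4, W/4)
        c = self.content_enc(content_image)   # (B, 64, H/4, W/4)
        frames = []
        for t in range(s.shape[2]):           # decode the video frame by frame
            fused = torch.cat([s[:, :, t], c], dim=1)  # (B, 128, H/4, W/4)
            frames.append(self.decoder(fused))
        return torch.stack(frames, dim=2)     # (B, 3, T, H, W)
```

In this reading, the style branch summarizes the motion dynamics per time step while the content branch fixes the appearance; a conditional discriminator (not shown) would supply the adversarial loss over the generated frames.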

Highlights

  • Motion capture technology, also known as dynamic capture, records the movement of people or objects

  • Human movement style is an indispensable element of animation, film, and games

  • This article conducts an in-depth study of human movement synthesis and style transfer

Introduction

Motion capture technology is known as dynamic capture; motion capture devices are used to record the movement of people or objects. Deep learning methods can extract abstract features from motion capture data well and, by establishing appropriate constraints on those motion features, achieve synthesis between different motions.
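As a rough illustration of the "deep self-coding plus constraints" idea, the sketch below encodes motion capture frames into a low-dimensional feature space and applies a Gram-matrix style constraint there, as the abstract describes. PyTorch is assumed; the layer sizes, the 63-dimensional input (21 joints × 3 coordinates), and the loss weighting are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: an autoencoder maps high-dimensional mocap frames to a
# low-dimensional feature space, and a Gram-matrix constraint on those
# features controls motion style. All dimensions and weights are assumed.
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    """Encode mocap frames into a low-dimensional feature space and decode back."""
    def __init__(self, dim_in=63, dim_feat=16):  # 63 = 21 joints x 3 coords (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_in, 128), nn.ReLU(),
            nn.Linear(128, dim_feat),
        )
        self.decoder = nn.Sequential(
            nn.Linear(dim_feat, 128), nn.ReLU(),
            nn.Linear(128, dim_in),
        )

    def forward(self, x):          # x: (T, dim_in) motion sequence
        z = self.encoder(x)        # (T, dim_feat) low-dimensional features
        return self.decoder(z), z

def gram_matrix(feats):
    """Gram matrix of a (T, K) feature sequence; captures style statistics."""
    return feats.T @ feats / feats.shape[0]

def transfer_loss(z_out, z_content, z_style, style_weight=10.0):
    """Keep content features close while matching the style Gram matrix."""
    content_loss = nn.functional.mse_loss(z_out, z_content)
    style_loss = nn.functional.mse_loss(gram_matrix(z_out), gram_matrix(z_style))
    return content_loss + style_weight * style_loss
```

Minimizing `transfer_loss` over the latent sequence (or fine-tuning the decoder with it) would pull the decoded motion toward the style sequence's feature statistics while preserving the content sequence's structure.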
