Abstract

The need to computationally process human motion in order to produce realistic, dynamic animations is growing with the fourth industrial revolution. Motion style transfer offers an appealing alternative to authoring motions from scratch by reusing already recorded motion data to automatically create realistic motion samples. Motion style transfer techniques have been transformed by deep learning algorithms, especially deep neural networks (DNNs), which are well suited to motion synthesis tasks because they can predict future motion styles. A style transfer method called CNN-BiLSTM-ATT (Convolutional Neural Network-Bidirectional Long Short-Term Memory with Attention) is proposed to analyze spatiotemporal features. The approach aims to realistically synthesize and represent the intricacy of human motion by combining CNNs, BiLSTMs, and attention mechanisms. By extracting spectral intensity representations of the reference and source styles, the difference between them is converted into a novel motion that may include previously unseen movements. A temporally sliding window filter is added to the method to perform local analysis in time, substantially improving the processing of heterogeneous motion. As a result, the method can be used to enrich style databases by filling in missing actions and to enhance the effectiveness of earlier style transfer techniques. The effectiveness of the proposed approach is assessed through controlled user studies and quantitative experiments. The results show a notable improvement over earlier studies, highlighting the method's capacity to produce complete and accurate motion sequences.
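
To make the described architecture more concrete, the sketch below shows one plausible way a CNN-BiLSTM-attention encoder for motion feature sequences could be assembled in PyTorch. The feature dimension, channel counts, and the additive attention pooling are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAtt(nn.Module):
    """Hypothetical CNN-BiLSTM-attention encoder for motion feature sequences."""

    def __init__(self, n_motion_feats=63, conv_channels=128, lstm_hidden=256):
        super().__init__()
        # 1-D convolution over time extracts local spatiotemporal patterns
        self.conv = nn.Sequential(
            nn.Conv1d(n_motion_feats, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Bidirectional LSTM models longer-range temporal dependencies
        self.bilstm = nn.LSTM(conv_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Additive attention pools the BiLSTM outputs into a single style code
        self.att_score = nn.Linear(2 * lstm_hidden, 1)

    def forward(self, motion):                       # motion: (B, T, F)
        x = self.conv(motion.transpose(1, 2))        # (B, C, T)
        h, _ = self.bilstm(x.transpose(1, 2))        # (B, T, 2H)
        w = torch.softmax(self.att_score(h), dim=1)  # (B, T, 1) attention weights
        return (w * h).sum(dim=1)                    # (B, 2H) pooled style embedding


# Example: encode a batch of 2 motion clips, 120 frames, 63 features per frame
clips = torch.randn(2, 120, 63)
style_code = CNNBiLSTMAtt()(clips)
print(style_code.shape)  # torch.Size([2, 512])
```

In such a design, the pooled embedding would summarize the style of a clip; the style transfer itself, as described in the abstract, operates on spectral intensity representations of the reference and source styles within a temporally sliding window.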
