Abstract

In the modern digital world, animated applications have become an integral part of everyday life. Capturing human motion is a fundamental requirement of these applications and is performed by motion-capture (MoCap) systems. Because MoCap systems and marker configurations are heterogeneous, the captured skeletons may vary in structure, number of joints, and bone lengths. Motion retargeting is therefore needed to transfer motion captured from one articulated actor to another. Although motion retargeting is a long-standing problem, current methods cannot automatically retarget motion between skeletons. This paper presents the retraining of a popular motion retargeting network on a new large-scale motion dataset. The retraining aims to demonstrate the network's ability to handle motion retargeting between skeletons that may have different structures. The paper first presents the motion retargeting network, which employs an encoder-decoder architecture. We then retrain the network on a new MoCap dataset. To validate its effectiveness, we visually compare the generated results with those of the existing retargeting model. The results show that the retrained network effectively retargets a variety of motions.
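To illustrate the encoder-decoder idea mentioned above, a retargeting network of this kind can be thought of as encoding a source-skeleton pose into a skeleton-agnostic latent code and decoding that code for a differently structured target skeleton. The following NumPy sketch shows only the shape of such a pipeline; all dimensions, layer sizes, and names are hypothetical and not taken from the paper, and random weights stand in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Random weights as placeholders for trained parameters (hypothetical sizes).
    return rng.standard_normal((in_dim, out_dim)) * 0.1

# Hypothetical dimensions: source skeleton with 24 joints, target with 20,
# each joint represented by a 4-D rotation (quaternion), latent size 64.
SRC_JOINTS, TGT_JOINTS, ROT_DIM, LATENT = 24, 20, 4, 64

W_enc = linear(SRC_JOINTS * ROT_DIM, LATENT)   # encoder: source pose -> latent
W_dec = linear(LATENT, TGT_JOINTS * ROT_DIM)   # decoder: latent -> target pose

def retarget(src_pose):
    """Map a source-skeleton pose to a target-skeleton pose via the latent space."""
    z = np.tanh(src_pose.reshape(-1) @ W_enc)          # skeleton-agnostic code
    return (z @ W_dec).reshape(TGT_JOINTS, ROT_DIM)    # target-skeleton pose

src_pose = rng.standard_normal((SRC_JOINTS, ROT_DIM))
tgt_pose = retarget(src_pose)
print(tgt_pose.shape)  # (20, 4): a pose for the target skeleton
```

A trained network would of course use deeper layers and skeleton-aware operators, but the key property sketched here is that the latent code decouples the motion from any particular skeleton topology.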
