Abstract

Applying motion capture data of multi-person interactions to virtual characters is challenging because the interaction semantics must be preserved in addition to the general requirements of motion retargeting, such as preventing penetration and preserving naturalness. An efficient way to represent the scene semantics of interaction motions is to define the spatial relationships between the characters' body parts. However, existing methods of this kind consider only the character skeletons, and thus may require post-processing to refine the retargeted motions and remove artifacts at the level of the skin meshes. This paper proposes a novel method for retargeting interaction motions with respect to character skins. To this end, we introduce the aura mesh, a mesh surrounding a character's skin, to represent skin-level spatial relationships between body parts. Using the aura mesh, we can retarget interaction motions while preserving skin-level spatial relationships and reducing skin interpenetration.
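
The sketch below illustrates the core idea in a minimal form: surrounding a character's skin with an offset "aura" mesh and recording where another character's body part lies relative to it. It assumes the aura mesh is built by pushing each skin vertex outward along its vertex normal and that relationships are stored as closest-vertex correspondences with displacements; the paper's actual aura-mesh construction and relationship descriptors may differ, and the function names, the offset value, and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_aura_mesh(skin_vertices, faces, thickness=0.05):
    """Offset each skin vertex outward along its area-weighted normal.

    skin_vertices : (N, 3) array of skin-mesh vertex positions
    faces         : (M, 3) array of triangle vertex indices
    thickness     : aura offset distance (illustrative value, not from the paper)
    """
    tris = skin_vertices[faces]                       # (M, 3, 3) triangle corners
    face_normals = np.cross(tris[:, 1] - tris[:, 0],  # unnormalized face normals;
                            tris[:, 2] - tris[:, 0])  # magnitude is proportional to area
    vertex_normals = np.zeros_like(skin_vertices)
    for corner in range(3):                           # accumulate onto incident vertices
        np.add.at(vertex_normals, faces[:, corner], face_normals)
    vertex_normals /= np.linalg.norm(vertex_normals, axis=1, keepdims=True) + 1e-12
    # The aura mesh shares the skin topology but is inflated outward by `thickness`.
    return skin_vertices + thickness * vertex_normals

def skin_level_relationship(aura_vertices, other_part_vertices):
    """For each vertex of the other character's body part, record the index of
    the closest aura-mesh vertex and the displacement to it."""
    # Pairwise distances between the body-part vertices and the aura vertices.
    diffs = other_part_vertices[:, None, :] - aura_vertices[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    closest = dists.argmin(axis=1)                    # (K,) closest aura vertex per point
    offsets = other_part_vertices - aura_vertices[closest]
    return closest, offsets
```

In a retargeting setting, the stored correspondences and offsets would be re-evaluated on the target character's aura mesh to obtain goal positions for the interacting body part, which a downstream solver (not specified here) could then track while avoiding skin interpenetration.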
