Abstract

In avatar-mediated telepresence systems, the involved spaces are assumed to be similar so that the avatar in the remote space can imitate the user's motion while preserving the semantic intention of the action performed in the local space. For example, when the user touches a desk, the avatar should reproduce this touch in the remote space to convey the intended meaning correctly. It is unlikely, however, that the two physical spaces are exactly the same in terms of room size or the locations of placed objects. A naive mapping of the user's joint motion to the avatar will therefore not produce semantically correct avatar motion with respect to the remote environment. Existing studies have addressed the problem of retargeting human motions to an avatar for telepresence applications. Few studies, however, have focused on retargeting continuous full-body motions, such as locomotion and object-interaction motions, in a unified manner. In this paper, we propose a novel motion adaptation method that generates the full-body motions of a human-like avatar on the fly in the remote space. The proposed method handles locomotion and object-interaction motions, as well as smooth transitions between them, according to the given user actions, under the condition of a bijective environment mapping between morphologically similar spaces. Our experiments show the effectiveness of the proposed method in generating plausible and semantically correct full-body avatar motions in room-scale spaces.
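To illustrate the idea behind a bijective environment mapping, the following minimal Python sketch transfers a contact point from an object in the local space to its counterpart in the remote space via normalized object coordinates, giving the avatar a semantically equivalent interaction target. The class, function names, and numbers are assumptions made for illustration only; they are not the paper's actual formulation.

```python
import numpy as np

class BoxObject:
    """Axis-aligned object described by its minimum corner and extents (illustrative)."""
    def __init__(self, origin, size):
        self.origin = np.asarray(origin, dtype=float)  # min corner (x, y, z)
        self.size = np.asarray(size, dtype=float)      # extents (w, h, d)

    def to_normalized(self, point):
        """World-space point -> normalized [0, 1]^3 coordinates within the object."""
        return (np.asarray(point, dtype=float) - self.origin) / self.size

    def from_normalized(self, uvw):
        """Normalized coordinates -> world-space point in this object's frame."""
        return self.origin + np.asarray(uvw, dtype=float) * self.size


def map_interaction_target(local_obj, remote_obj, local_contact_point):
    """Transfer a contact point from a local object to its remote counterpart.

    Assuming a bijective correspondence between objects, the normalized
    coordinates on the local object identify a unique point on the remote
    object, preserving the semantic intent (e.g., "touch the desk top").
    """
    uvw = local_obj.to_normalized(local_contact_point)
    return remote_obj.from_normalized(uvw)


if __name__ == "__main__":
    # Desks of different sizes and positions in the two rooms (made-up numbers).
    local_desk = BoxObject(origin=[1.0, 0.0, 2.0], size=[1.2, 0.75, 0.6])
    remote_desk = BoxObject(origin=[3.5, 0.0, 0.5], size=[1.6, 0.72, 0.8])

    # The user touches the center of the local desk top.
    touch_local = [1.6, 0.75, 2.3]
    touch_remote = map_interaction_target(local_desk, remote_desk, touch_local)
    print("Avatar reach target in remote room:", touch_remote)
    # A full-body motion generator or IK solver would then drive the avatar's
    # hand to this target while producing locomotion toward the remote desk.
```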
