Abstract

With the rise of remote work culture and the increasing computing capabilities of head-mounted displays (HMDs), more immersive, collaborative experiences are desired in remote–local mixed/augmented reality (MR/AR). Photorealistic full-body avatar representations of users in remote workspace interactions have been shown to increase social presence, nonverbal behavior, and engagement. However, directly mapping body pose angles from the local to the remote workspace will, in most cases, introduce positional errors during human–object interaction because of the dissimilarity between the remote and local workspaces. The interaction must therefore be retargeted, but in a way that preserves the original intent of the body pose. These two objectives sometimes contradict each other, so the problem can be formulated as a multi-objective optimization (MO) in which the primary objective is to minimize positional errors and the secondary objective is to preserve the original interaction body pose. The current state-of-the-art solution uses an evolutionary-computation-based inverse kinematics (IK) approach to solve the MO problem, where the weights between the objectives must be set by the user through trial and error, leading to suboptimal solutions. In this paper, we present a new dynamic weight allocation approach to this problem: the user sets a chosen minimum error tolerance, and the weights are distributed between the objectives by a dynamic allocation algorithm. We used a two-pronged approach to test the adaptability and robustness of this mechanism: (i) evaluating it on motion-captured human animations across varying speeds, error tolerances, and redirections, and (ii) conducting an experiment with 12 human participants whose actions during a book-shelving task in AR were recorded and redirected. Compared to static weighting, the dynamic weighting mechanism showed a net decrease in error (summed over both objectives) ranging from 20.5% to 34.42% across the varying animation speeds and a decrease in error ranging from 11.44% to 36.2% for the recorded human actions during the AR task, demonstrating its robustness and better pose preservation across interactions.
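To make the idea of dynamic weight allocation concrete, the following is a minimal Python sketch of how weight could be shifted between the two IK objectives based on a user-chosen error tolerance. All names (dynamic_weights, combined_cost, error_tolerance), the linear allocation rule, and the weighted-sum fitness are assumptions for illustration only, not the allocation algorithm described in the paper.

# Illustrative sketch of dynamic weight allocation between two IK objectives:
# minimizing positional error (w1) vs. preserving the original pose (w2).
# The linear rule below is a hypothetical stand-in for the paper's algorithm.

def dynamic_weights(positional_error: float, error_tolerance: float) -> tuple[float, float]:
    """Split the weight budget between the positional-error objective (w1)
    and the pose-preservation objective (w2), driven by how far the current
    positional error exceeds the user-chosen tolerance."""
    tol = max(error_tolerance, 1e-9)  # guard against a zero tolerance
    if positional_error <= tol:
        # Error is within tolerance: put all weight on preserving the pose.
        return 0.0, 1.0
    # Error exceeds tolerance: shift weight toward reducing it, in proportion
    # to how strongly the tolerance is violated (clamped to [0, 1]).
    excess = min((positional_error - tol) / tol, 1.0)
    return excess, 1.0 - excess

def combined_cost(positional_error: float, pose_deviation: float,
                  error_tolerance: float) -> float:
    """Weighted sum of the two objectives, usable as the fitness value of an
    evolutionary IK solver (hypothetical formulation)."""
    w1, w2 = dynamic_weights(positional_error, error_tolerance)
    return w1 * positional_error + w2 * pose_deviation

# Example: with a 2 cm tolerance, a 5 cm positional error pushes most of the
# weight onto the positional-error term.
print(dynamic_weights(0.05, 0.02))   # (1.0, 0.0)
print(combined_cost(0.05, 0.3, 0.02))

In contrast to a static weighting, a rule of this kind reallocates weight per frame, so pose preservation dominates whenever the positional error is already acceptable.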
