Abstract

Mapping human hand motion to robotic hands is significant for a wide range of applications, such as teleoperation and imitation learning. The ultimate goal is a device-independent control solution based on human hand synergies. Over the past twenty years, a considerable number of mapping methods have been proposed, but most rely on intrusive devices, such as CyberGlove data gloves, to capture human hand motion. Only recently have a small number of mapping methods been proposed that build on vision-based human hand pose estimation. Traditionally, mapping methods and vision-based human hand pose estimation have been studied independently. To the best of our knowledge, no review has summarized the achievements of hand motion mapping methods or explored the feasibility of applying off-the-shelf human hand pose estimation algorithms to teleoperation. To address this gap, we present the first survey on mapping human hand motion to robotic hands from a kinematic and algorithmic perspective. We discuss the practical challenges, summarize recent mapping methods, analyze the theoretical solutions, and provide a teleoperation-oriented overview of human hand pose estimation. As a preliminary exploration, a vision-based human hand pose estimation algorithm is introduced for robotic hand teleoperation.
