Abstract

People with disabilities affecting both arms need to grasp various densely placed objects in daily life. However, existing arm-free human-robot interfaces (HRIs), such as language-based and gaze-based HRIs, struggle to control a robotic arm effectively enough to complete this task. To address it, we propose an arm-free HRI based on Mixed Reality (MR) feedback and head control, which lets users apply their own judgment to correct robot perception errors and make grasp decisions. MR feedback provides a 3D visualization of the robot's perception results and renders a virtual gripper aligned with the real gripper. Head control allows users to flexibly and effectively control the gripper's 6D pose. In our experiments, 10 subjects used our HRI to grasp 15 densely placed everyday objects of different sizes and shapes, including transparent and specular objects such as a measuring tape, transparent bottles, and a charging head. The experimental and comparison results show that our HRI completes the task more effectively, adapts to grasping unseen objects, and tolerates large point-cloud errors. A presentation video is available at https://youtu.be/QO72XT8BYGs.
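To make the head-control idea concrete, below is a minimal sketch of one way head-orientation changes (e.g., from an MR headset's tracked pose) could be mapped to incremental updates of a gripper's 6D pose. This is an illustrative assumption, not the paper's implementation: the class, method names, gains, and the rotate/translate mode switch are all hypothetical.

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: rotation matrix for a unit axis and angle (rad)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

class HeadControlledGripper:
    """Hypothetical head-control sketch: head yaw/pitch deltas drive a
    virtual gripper's 6D pose (position + orientation)."""

    def __init__(self, gain_rot=0.5, gain_trans=0.05):
        self.position = np.zeros(3)   # gripper position (m)
        self.rotation = np.eye(3)     # gripper orientation
        self.gain_rot = gain_rot      # rad of gripper turn per rad of head turn
        self.gain_trans = gain_trans  # m per rad, used in translation mode
        self.mode = "rotate"          # "rotate" or "translate" (assumed toggle)

    def update(self, head_yaw_delta, head_pitch_delta):
        """Apply one control tick given head-orientation deltas (rad)."""
        if self.mode == "rotate":
            # Head yaw turns the gripper about its own z axis, pitch about y.
            self.rotation = self.rotation @ axis_angle_to_matrix(
                [0, 0, 1], self.gain_rot * head_yaw_delta)
            self.rotation = self.rotation @ axis_angle_to_matrix(
                [0, 1, 0], self.gain_rot * head_pitch_delta)
        else:
            # In translation mode the same deltas nudge the gripper in its
            # own x/y plane instead of rotating it.
            step = self.gain_trans * np.array(
                [head_yaw_delta, head_pitch_delta, 0.0])
            self.position = self.position + self.rotation @ step
        return self.position, self.rotation
```

Under this sketch, the MR feedback loop would render the virtual gripper at the returned pose each tick, so the user sees the effect of each head motion before committing to a grasp.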
