Abstract

Typically, the target position of a legged robot's hand or foot is specified in a virtual space using a mouse as the user interface. Interfaces of this type were used by teams participating in the DARPA Robotics Challenge. However, operation takes too long, and it is difficult for operators to recognize the spatial relationship between the robot and its environment. To solve these problems, we have proposed an interface that fuses and presents both visual and haptic information. In previous research, we developed a visual-haptic fusion system for the teleoperation of a crawler robot. In this study, we adapt that system to the operation of a legged robot and verify the effectiveness of the proposed system through comparison with the conventional method.
