Abstract

Lack of situation awareness significantly decreases performance in missions where a mobile robot is operated remotely by a human. The user interface is a key factor influencing the operator's situation awareness. The information from the remote site is limited to what the robot's sensors can provide, and it generally arrives with a certain, possibly varying, communication delay. Predictive displays are a promising approach to cope with these problems: in teleoperation scenarios, a predictive user interface can provide an artificial exocentric view that increases the operator's situation awareness. This work presents an approach for realizing a predictive mixed reality user interface with the help of motion control theory. The human operator commands a virtual robot projected into the camera image delivered by the physical robot. This generates a trajectory for the real physical robot, which it executes after a certain time. Combined with mixed reality technologies, this yields an artificial, exocentric view of the mobile robot and thus a short-time predictive user interface for mobile robot teleoperation.
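The core idea of the abstract can be illustrated with a minimal sketch: the operator drives a virtual robot that responds instantly, while the same velocity commands are queued and replayed on the physical robot after the communication delay, so the virtual pose leads the real pose. The unicycle kinematics, class name, and parameters below are illustrative assumptions, not taken from the paper.

```python
import math
from collections import deque

class PredictiveTeleop:
    """Sketch of a short-time predictive display for teleoperation.

    The virtual (predicted) robot integrates operator commands immediately;
    the physical robot executes the same commands after a fixed delay of
    `delay_steps` control cycles (assumed constant here for simplicity).
    """

    def __init__(self, delay_steps, dt=0.1):
        self.dt = dt
        self.virtual = [0.0, 0.0, 0.0]   # x, y, heading of predicted robot
        self.physical = [0.0, 0.0, 0.0]  # x, y, heading of delayed robot
        # Pre-filling the queue with zero commands models the delay.
        self.queue = deque([(0.0, 0.0)] * delay_steps)

    @staticmethod
    def _step(pose, v, w, dt):
        # Unicycle kinematics: forward velocity v, turn rate w.
        x, y, th = pose
        return [x + v * math.cos(th) * dt,
                y + v * math.sin(th) * dt,
                th + w * dt]

    def command(self, v, w):
        """Apply (v, w) to the virtual robot now; the physical robot
        executes the command that left the delay queue this cycle."""
        self.virtual = self._step(self.virtual, v, w, self.dt)
        self.queue.append((v, w))
        dv, dw = self.queue.popleft()
        self.physical = self._step(self.physical, dv, dw, self.dt)
```

With a 5-cycle delay and 20 constant forward commands, the virtual robot advances 2.0 m while the physical robot has covered only 1.5 m; once the operator stops commanding, the physical robot catches up and both poses coincide, which is the behavior a predictive display exploits.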
