Abstract

We have applied a teaching-by-showing method to the navigation of an autonomous mobile robot driven by a vision system. A human operator shows the robot a set of scenes that serve as feature points of the environment, each paired with motion information as its attribute. The robot then navigates the paths between feature points by comparing the current camera image with the given image information. When the robot determines that the current image coincides with a stored image, it reads out the motion command associated with that image. The benefit of this method is that the total image data size becomes much smaller than in conventional methods that use sequential image data. We have implemented this algorithm for the navigation of the autonomous omni-directional mobile robot ZEN, and the results in a given environment were quite successful.
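
For illustration only, the following is a minimal sketch of a teach-and-replay loop in this style; it is not the paper's implementation. It assumes grayscale images compared by normalized cross-correlation, and the names `taught_views`, `match_score`, `navigate_step`, the `THRESHOLD` value, and the motion-command strings are hypothetical, introduced here for the example.

```python
import numpy as np

def match_score(current, taught):
    """Normalized cross-correlation between two grayscale images
    (hypothetical similarity measure; the paper's comparison method
    may differ)."""
    a = current - current.mean()
    b = taught - taught.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a.ravel() @ b.ravel() / denom) if denom else 0.0

# Taught feature points shown by a human: (scene image, motion command
# attribute) pairs. Random arrays stand in for real camera images.
taught_views = [
    (np.random.rand(64, 64), "forward 1.0 m"),
    (np.random.rand(64, 64), "turn left 90 deg"),
    (np.random.rand(64, 64), "stop"),
]

THRESHOLD = 0.95  # similarity required to declare the images coincide

def navigate_step(current_image, next_index):
    """Compare the current view with the next taught view; if they
    coincide, return the motion command stored as its attribute,
    otherwise None (keep moving toward the feature point)."""
    image, command = taught_views[next_index]
    if match_score(current_image, image) >= THRESHOLD:
        return command
    return None

# Example: check whether the first taught feature point has been reached.
cmd = navigate_step(np.random.rand(64, 64), next_index=0)
```

Because only one image per feature point is stored, together with a short motion command, the memory footprint grows with the number of feature points rather than with path length, which is the data-size advantage the abstract describes.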
