Abstract

Recently, view-based (or appearance-based) approaches have been attracting interest in computer vision research. We have previously proposed a view-based visual navigation method using a model of the route called the “View Sequence,” which consists of a sequence of front views along a route memorized during a recording run. In this paper, we apply an omnidirectional vision sensor to our view-based navigation and propose an extended model of a route called the “Omni-View Sequence.” Matching of Omni-Views is achieved by hardware template matching. The omnidirectional vision sensor is well suited to real-time view-based recognition on a mobile robot, since all the information around the robot can be acquired simultaneously. Because view-based recognition becomes more stable as the view contains more information, the robot achieves more stable matching and more accurate navigation than with our former navigation method using the View Sequence.
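The paper performs the Omni-View matching with dedicated template-matching hardware; as a rough software illustration only, the sketch below shows one way such matching could look in Python. The function names, the panoramic image layout (columns as azimuth), and the use of normalized cross-correlation over circular heading shifts are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def match_omni_view(current: np.ndarray, memorized: np.ndarray) -> tuple[float, int]:
    """Compare a current panoramic (omnidirectional) view against one memorized
    Omni-View by normalized cross-correlation over all horizontal (heading) shifts.

    Both images are H x W grayscale panoramas whose columns correspond to azimuth,
    so a circular shift along the columns models a change in robot heading.
    Returns (best correlation score, column shift giving that score).
    """
    cur = (current - current.mean()) / (current.std() + 1e-8)
    best_score, best_shift = -1.0, 0
    for shift in range(memorized.shape[1]):
        ref = np.roll(memorized, shift, axis=1)
        ref = (ref - ref.mean()) / (ref.std() + 1e-8)
        score = float((cur * ref).mean())
        if score > best_score:
            best_score, best_shift = score, shift
    return best_score, best_shift

def localize(current: np.ndarray, omni_view_sequence: list[np.ndarray]) -> int:
    """Return the index of the memorized Omni-View that best matches the current view."""
    scores = [match_omni_view(current, view)[0] for view in omni_view_sequence]
    return int(np.argmax(scores))
```

In this reading, the best-matching index localizes the robot along the recorded route, and the best shift hints at the heading correction needed to follow it; the real system obtains the equivalent correlation in real time from the matching hardware.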
