Abstract

We propose Bayesian approaches for semantic mapping, active localization, and local navigation with affordable vision sensors. We develop a Bayesian model of an egocentric semantic map consisting of spatial object relationships and spatial node relationships. Our topological-semantic-metric (TSM) map has the characteristic that each node is a component of a general topological map containing information about spatial relationships. For localization, view-dependent place recognition, reorientation, and active search are used. The robot estimates its location by Bayesian filtering that leverages spatial relationships among observed objects, and can then infer the head direction toward a goal in the semantic map. For navigation, the robot perceives navigable space with a Kinect sensor and moves toward the goal location while preserving the reference head direction. If obstacles are found ahead, the robot changes its head direction to avoid them; after avoiding the obstacles, it performs active localization and finds a new head direction toward the goal. Our Bayesian navigation scheme determines whether the robot should select an action that follows the current line of motion or an action that avoids obstacles. We show that a mobile robot successfully navigates from a starting position to a goal node while avoiding obstacles using the proposed semantic navigation system with the TSM map.
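
To illustrate the kind of Bayesian filtering over map nodes described above, here is a minimal sketch of a discrete belief update in which observed objects support the nodes they are associated with. The node identifiers, object sets, and the simple hit/miss likelihood model are assumptions for illustration only, not the paper's actual formulation.

```python
def bayes_update(belief, observed_objects, node_objects, hit_prob=0.8, miss_prob=0.2):
    """Update a discrete belief over TSM map nodes from observed objects.

    belief           : dict {node_id: prior probability}
    observed_objects : set of object labels seen in the current view
    node_objects     : dict {node_id: set of object labels associated with that node}
    hit_prob/miss_prob are illustrative likelihoods, not values from the paper.
    """
    posterior = {}
    for node, prior in belief.items():
        expected = node_objects.get(node, set())
        likelihood = 1.0
        for obj in observed_objects:
            # Objects expected at this node raise its likelihood; unexpected ones lower it.
            likelihood *= hit_prob if obj in expected else miss_prob
        posterior[node] = prior * likelihood
    total = sum(posterior.values())
    return {n: p / total for n, p in posterior.items()} if total > 0 else belief

# Hypothetical example: three candidate nodes, robot observes a sofa and a TV.
belief = {"kitchen": 1 / 3, "living_room": 1 / 3, "hallway": 1 / 3}
node_objects = {
    "kitchen": {"sink", "fridge"},
    "living_room": {"sofa", "tv"},
    "hallway": {"door"},
}
print(bayes_update(belief, {"sofa", "tv"}, node_objects))
```

After normalization, the belief concentrates on the node whose associated objects best match the observation, which is the role the paper assigns to Bayesian filtering over spatial object relationships.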

