Abstract

For large-scale indoor environments, we propose a novel metric-topological 3D map for robot self-localization based on omnidirectional vision. Each local metric map is organized hierarchically, defining geometrical elements according to their environmental feature levels, while topological links in the global map connect adjacent local maps. We design a nonlinear omnidirectional camera model that projects the probabilistic map elements with explicit uncertainty manipulation, so that image features can be extracted in the vicinity of the corresponding projected curves. For the self-localization task, a human-machine interaction system is developed using hierarchical logic; its fusion center applies a feedback hierarchical fusion method to combine local estimates generated from multiple observations. Finally, a series of experiments demonstrates the reliability and practicality of our system.
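The abstract does not specify the camera model, but the idea of projecting probabilistic map elements with uncertainty can be illustrated with a common nonlinear omnidirectional model (the unified sphere model) and first-order covariance propagation through a numerically estimated Jacobian. All parameter values below (`XI`, `F`, `C`) are illustrative assumptions, not values from the paper:

```python
import numpy as np

XI = 0.9                        # mirror parameter of the unified sphere model (assumed)
F = np.array([300.0, 300.0])    # focal lengths in pixels (assumed)
C = np.array([320.0, 240.0])    # principal point (assumed)

def project(p):
    """Unified-sphere omnidirectional projection of a 3-D point p = [X, Y, Z]."""
    d = np.linalg.norm(p)
    denom = p[2] + XI * d       # perspective division after lifting onto the unit sphere
    m = p[:2] / denom
    return F * m + C            # pixel coordinates

def project_with_uncertainty(p, cov_p, eps=1e-6):
    """Propagate a 3x3 point covariance through the nonlinear projection
    via a forward-difference 2x3 Jacobian: cov_u = J cov_p J^T."""
    u = project(p)
    J = np.zeros((2, 3))
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        J[:, i] = (project(p + dp) - u) / eps
    return u, J @ cov_p @ J.T

# Example: a map point 2 m in front of the camera with 1 cm isotropic
# position uncertainty; cov_u bounds the image search region for features.
u, cov_u = project_with_uncertainty(np.array([0.3, 0.1, 2.0]),
                                    np.eye(3) * 1e-4)
```

The projected covariance `cov_u` defines an uncertainty ellipse in the image, which is what makes it possible to restrict feature extraction to the vicinity of the projected curve rather than searching the whole image.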
