Abstract

In this paper we propose a neurosymbolic hybrid system for robot self-localization. This crucial issue in autonomous robotics has been tackled with differing approaches depending mainly on the robot's sensors and actuation devices, its available computational resources, and the environment in which it operates. The system we present is designed for robots equipped with poor sensors (a 16-element sonar array, a camera performing only very light computation, and odometric sensors) operating in an office-like environment. In the proposed landmark-based approach, the "natural" environmental features are the corners formed by wall intersections and by the doors present in the scene. The whole problem, from landmark recognition to position estimation, is handled by a unified neurosymbolic system.
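
To illustrate the position-estimation step, the sketch below shows one conventional way to recover a robot's (x, y) pose from a few recognized corner landmarks whose map coordinates are known. The map coordinates, range measurements, and the least-squares trilateration used here are illustrative assumptions only; they are not the authors' unified neurosymbolic method, which handles recognition and estimation jointly.

# Illustrative sketch only: assumes the landmark-recognition stage has already
# matched detected corners to known map coordinates. The least-squares
# trilateration below is a conventional stand-in for the position-estimation
# stage, not the authors' actual neurosymbolic method.
import numpy as np

def estimate_position(landmarks, ranges):
    """Estimate the robot's (x, y) position from >= 3 recognized landmarks.

    landmarks : (N, 2) array of map coordinates of the recognized corners
    ranges    : (N,)   array of measured distances (e.g., from the sonar array)
    """
    landmarks = np.asarray(landmarks, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = landmarks[0]
    r0 = ranges[0]
    # Subtracting the first circle equation (x - xi)^2 + (y - yi)^2 = ri^2
    # from the others linearizes the problem into A @ [x, y] = b.
    A = 2.0 * (landmarks[1:] - landmarks[0])
    b = (np.sum(landmarks[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2)
         - (ranges[1:] ** 2 - r0 ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical usage: four corners of a 5 m x 4 m office room.
corners = [(0.0, 0.0), (5.0, 0.0), (5.0, 4.0), (0.0, 4.0)]
true_pose = np.array([2.0, 1.5])
dists = [float(np.linalg.norm(true_pose - np.array(c))) for c in corners]
print(estimate_position(corners, dists))   # approximately [2.0, 1.5]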
