This paper presents a complete topological navigation system for a resource-constrained mobile robot such as Pepper, based on image memory and the teach-and-repeat paradigm. The image memory is constructed from a set of reference images acquired during a prior mapping phase and arranged topologically. A* search finds the optimal path between the current location and the destination. Images from the robot’s RGB camera localize the robot within the topological graph, and an Image-Based Visual Servoing (IBVS) control scheme drives the robot to the next node in the graph. Depth images update a local egocentric occupancy grid, and a second IBVS controller steers the robot through local free space. The outputs of the two IBVS controllers are fused to form the final control command for the robot. We demonstrate real-time navigation with the Pepper robot in an indoor open-plan office environment, without the need for accurate metric mapping and localization. Our core navigation module runs entirely onboard the robot, despite its limited computing capabilities, at 5 Hz without requiring any external computing resources. We have successfully performed navigation trials over 15 days, visiting more than 50 destinations and traveling more than 1200 m with a success rate of over 80%. We discuss remaining challenges and openly share our software.
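The path-planning step described above can be sketched as a standard A* search over the topological image-memory graph. The sketch below is illustrative only and is not the paper's implementation: node names, edge costs, and the zero heuristic (which reduces A* to Dijkstra's algorithm) are all assumptions; in the actual system, nodes would be reference-image IDs and the heuristic some admissible distance estimate.

```python
import heapq

def astar(graph, heuristic, start, goal):
    """A* over a topological graph.

    graph: {node: [(neighbor, edge_cost), ...]} adjacency list.
    heuristic: admissible estimate heuristic(node, goal) -> float.
    Returns the list of nodes on the cheapest path, or None.
    """
    # Priority queue of (f = g + h, g, node, path-so-far).
    open_set = [(heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}  # cheapest known cost to each node
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(
                    open_set,
                    (ng + heuristic(nbr, goal), ng, nbr, path + [nbr]),
                )
    return None  # goal unreachable from start

# Toy image-memory graph (hypothetical node names and costs).
graph = {
    "A": [("B", 1.0), ("C", 2.5)],
    "B": [("D", 1.0)],
    "C": [("D", 0.5)],
}
print(astar(graph, lambda a, b: 0.0, "A", "D"))  # → ['A', 'B', 'D']
```

With a zero heuristic the search explores nodes in order of accumulated cost; supplying a real distance estimate between the places where the reference images were taken would prune the search without changing the result, as long as the estimate never overestimates the true remaining cost.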