Abstract

Semantic information has proven to be an enabling factor for robots to better understand their surroundings. In this paper, we propose an RGB-D-based semantic simultaneous localization and mapping (SLAM) framework for rescue robots. By augmenting the RGB-D SLAM system with a convolutional neural network (CNN), our framework generates not only dense geometric point-cloud maps but also corresponding point-wise semantic information (i.e. a semantic map). With the semantic map, the rescue robot can distinguish terrain types, avoid obstacles, and identify paths with higher traversability. We filter the semantic information at two levels. At the frame level, we use depth information to determine whether neighboring pixels in a semantic image belong to the same object, and thereby filter the segmentation result of each frame. At the map level, we filter the semantic map by accumulating observations over multiple frames and retaining consistent semantic labels. To validate the effectiveness of the proposed semantic SLAM framework, we record an RGB-D dataset of the RoboCup Rescue-Robot-League (RRL) competition environment. Experiments show that our semantic SLAM framework can generate dense and accurate semantic maps of the complex RRL competition environment.
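The paper's code is not included here, but the two filtering steps described in the abstract can be illustrated with a minimal sketch. The sketch below assumes a depth-discontinuity rule for the frame-level filter and a per-point majority vote for the map-level filter; all function names, thresholds, and the label convention (-1 for "unknown") are hypothetical illustrations, not the authors' exact method.

import numpy as np

def depth_filter_labels(labels, depth, max_depth_gap=0.05):
    """Frame-level filter (assumed rule): reject a pixel's semantic label
    when its depth jumps away from every 4-connected neighbor, i.e. the
    pixel likely does not belong to the same object as its neighborhood."""
    filtered = labels.copy()
    h, w = depth.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbors = (depth[y - 1, x], depth[y + 1, x],
                         depth[y, x - 1], depth[y, x + 1])
            if all(abs(depth[y, x] - d) > max_depth_gap for d in neighbors):
                filtered[y, x] = -1  # mark the label as unreliable
    return filtered

class SemanticMapFuser:
    """Map-level filter (assumed rule): accumulate label observations for
    each map point across frames and keep the most consistent label."""

    def __init__(self):
        self.votes = {}  # point id -> {label: observation count}

    def add_observation(self, point_id, label):
        if label < 0:  # skip pixels rejected by the frame-level filter
            return
        counts = self.votes.setdefault(point_id, {})
        counts[label] = counts.get(label, 0) + 1

    def fused_label(self, point_id):
        counts = self.votes.get(point_id)
        if not counts:
            return -1  # no reliable observation for this point yet
        return max(counts, key=counts.get)  # majority label wins

In practice, semantic SLAM systems often replace the simple majority vote with a probabilistic (e.g. Bayesian) fusion of label distributions; the vote above only mirrors the "retaining consistent semantic labels" idea from the abstract.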
