Abstract

With the development of computer vision, machine learning, and deep learning technologies, the research community focuses not only on traditional SLAM problems, such as geometric mapping and localization, but also on semantic SLAM. In this paper, we propose a semantic SLAM system that builds semantic maps with object-level entities and is integrated into an RGB-D SLAM framework. The system seamlessly combines an object detection module, realized by a deep-learning method, with an RGB-D SLAM localization module. The object detection module performs object detection and recognition, while the localization module estimates the exact camera pose; together, the two modules produce semantic maps of the environment. Furthermore, to improve the computational efficiency of the framework, we construct an improved Octomap based on a fast line rasterization algorithm. Meanwhile, for the sake of accuracy and robustness of the semantic map, a conditional random field is employed for optimization. Finally, we evaluate our semantic SLAM on three tasks: localization, object detection, and mapping. Specifically, localization accuracy and mapping speed are evaluated on the TUM dataset. Compared with ORB-SLAM2 and the original RGB-D SLAM, our system achieves 72.9% and 91.2% improvements, respectively, in localization in dynamic environments, as measured by root-mean-square error. With the improved Octomap, the proposed semantic SLAM is 66.5% faster than the original RGB-D SLAM. We also demonstrate the effectiveness of object detection through quantitative evaluation on an automated inventory management task, using a real-world dataset recorded in a realistic office.
