Abstract

Simultaneous Localization and Mapping (SLAM) plays an important role in computer vision and robotics. The traditional SLAM framework adopts a strong static-world assumption for analytical convenience, so coping with dynamic environments is of vital importance and has attracted increasing attention. Existing SLAM systems for dynamic scenes either rely solely on semantic information, rely solely on geometric information, or naively combine their results in a loosely coupled way. In this paper, we present SOF-SLAM: Semantic Optical Flow SLAM, a visual semantic SLAM system for dynamic environments built on the RGB-D mode of ORB-SLAM2. We propose a new dynamic-feature detection approach called semantic optical flow, a tightly coupled method that fully exploits the dynamic characteristics of features hidden in both semantic and geometric information to remove dynamic features effectively and reasonably. The pixel-wise semantic segmentation results produced by SegNet serve as a mask in the proposed semantic optical flow to obtain a reliable fundamental matrix, which is then used to filter out truly dynamic features. Only the remaining static features are retained in the tracking and optimization modules, yielding accurate camera pose estimation in dynamic environments. Experiments are conducted on the public TUM RGB-D dataset and in a real-world environment. Compared with ORB-SLAM2, the proposed SOF-SLAM achieves an average improvement of 96.73% in highly dynamic scenarios, and it also outperforms four other state-of-the-art SLAM systems designed for dynamic environments.

Highlights

  • Simultaneous Localization and Mapping (SLAM) constructs a map of the surrounding world from data collected by the platform running the SLAM system, while simultaneously localizing the platform within that map

  • CONTRIBUTION AND OUTLINE In this paper we propose a visual semantic SLAM system for dynamic environments, i.e., Semantic Optical Flow SLAM (SOF-SLAM), which is built on ORB-SLAM2

  • EVALUATION We evaluate our SOF-SLAM system on the public TUM RGB-D dataset to assess its performance in dynamic environments


Summary

Introduction

Simultaneous Localization and Mapping (SLAM) constructs a map of the surrounding world from data collected by the platform running the SLAM system, while simultaneously localizing the platform within that map. The proposed SOF-SLAM system greatly reduces the influence of dynamic objects in the environment through our dynamic-feature detection and removal approach, semantic optical flow, which detects dynamic features using geometric and semantic information in a tightly coupled way. Our contribution can be summarized as follows: the proposed SOF-SLAM fully exploits the complementary nature of motion-prior information from semantic segmentation and motion-detection information from the epipolar geometry constraint, whereas existing SLAM systems either depend solely on semantic information or on geometric information, or naively combine their results to remove dynamic features.
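The epipolar check at the heart of this approach can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: the movable class set, the distance threshold, and the function names are assumptions, and in practice the fundamental matrix `F` would be estimated robustly (e.g., via RANSAC over matches that fall outside the semantically movable regions) rather than supplied directly.

```python
import math

# A-priori movable semantic classes (the paper treats people as dynamic);
# the label names here are illustrative, not SegNet's actual class ids.
MOVABLE = {"person"}

def epipolar_distance(F, p1, p2):
    """Distance from point p2 in frame 2 to the epipolar line l = F @ p1,
    where p1, p2 are (x, y) pixel coordinates and F is a 3x3 matrix."""
    x, y = p1
    a = F[0][0] * x + F[0][1] * y + F[0][2]
    b = F[1][0] * x + F[1][1] * y + F[1][2]
    c = F[2][0] * x + F[2][1] * y + F[2][2]
    u, v = p2
    return abs(a * u + b * v + c) / math.hypot(a, b)

def select_static_features(F, matches, labels, thresh=1.0):
    """Keep only features judged static.

    matches: list of ((x1, y1), (x2, y2)) feature correspondences.
    labels:  SegNet semantic class per feature (semantic prior).
    F should itself have been estimated from features outside MOVABLE
    regions, so the epipolar test below rejects the truly dynamic ones.
    """
    kept = []
    for (p1, p2), label in zip(matches, labels):
        if label in MOVABLE:
            continue  # semantic prior: discard a-priori movable features
        if epipolar_distance(F, p1, p2) < thresh:
            kept.append((p1, p2))  # consistent with the static scene
    return kept
```

As a sanity check, for a pure horizontal camera translation the fundamental matrix is the skew-symmetric form of the translation, epipolar lines are horizontal, and a match whose vertical coordinate changes between frames (an independently moving point) gets a large epipolar distance and is discarded.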

Results
Conclusion
