Abstract

Localization accuracy is a fundamental requirement for Simultaneous Localization and Mapping (SLAM) systems. Traditional visual SLAM (vSLAM) schemes usually assume static environments, so they do not perform well in dynamic ones. While a number of vSLAM frameworks have been reported for dynamic environments, their localization accuracy is usually unsatisfactory. In this article, we present a novel motion detection and segmentation method using Red Green Blue-Depth (RGB-D) data to improve the localization accuracy of feature-based RGB-D SLAM in dynamic environments. To overcome the undersegmentation produced by the semantic segmentation network, a mask inpainting method is developed to ensure the completeness of object segmentation. In the meantime, an optical flow-based motion detection method is proposed to detect dynamic objects from a moving camera, allowing robust detection by removing irrelevant information. Experiments performed on the public Technical University of Munich (TUM) RGB-D data set show that the presented scheme outperforms state-of-the-art RGB-D SLAM systems in terms of trajectory accuracy, improving the localization accuracy of RGB-D SLAM in dynamic environments.
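The core idea of optical flow-based motion detection from a moving camera can be sketched as follows: pixels belonging to dynamic objects produce a measured flow that deviates from the flow predicted by camera ego-motion alone. This is a minimal, hypothetical illustration (function names, the residual threshold, and the toy data are assumptions, not the paper's implementation):

```python
import numpy as np

def detect_dynamic_pixels(measured_flow, ego_flow, thresh=1.0):
    """Flag pixels whose measured optical flow deviates from the flow
    predicted by camera ego-motion by more than `thresh` pixels.
    A simplified stand-in for the paper's motion detection step."""
    residual = np.linalg.norm(measured_flow - ego_flow, axis=-1)
    return residual > thresh  # boolean dynamic-pixel mask

# Toy 8x8 scene: static background follows the ego-motion flow,
# while a 2x2 patch (a moving object) has extra horizontal motion.
h, w = 8, 8
ego = np.full((h, w, 2), 0.5)            # flow induced by camera motion
measured = ego.copy()
measured[2:4, 2:4] += np.array([3.0, 0.0])  # dynamic object region

mask = detect_dynamic_pixels(measured, ego, thresh=1.0)
print(int(mask.sum()))  # → 4 dynamic pixels detected
```

In a full system the ego-motion flow would be predicted from the estimated camera pose and the depth image, and features falling inside the dynamic mask would be excluded from pose estimation.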
