Abstract

Visual simultaneous localization and mapping (SLAM) is a fundamental capability for unmanned systems. Most current visual SLAM methods rely on the static-environment assumption, so dynamic objects in the camera’s field of view can seriously degrade their performance. In view of this, an RGB-D SLAM approach based on probability observations and clustering optimization for highly dynamic environments is proposed, which can effectively eliminate the influence of dynamic objects and accurately estimate the ego-motion of an RGB-D camera. The method contains a dual static map point detection strategy, carried out simultaneously in the current and previous frames. First, to enhance tracking robustness in highly dynamic environments, the probabilities of map points being static, computed from both reprojection deviation and intensity deviation, are used to weight the cost function for pose estimation. Meanwhile, taking previous frames as a reference, a static velocity probability based on sparse scene flow is computed to preliminarily identify static map points and further improve tracking accuracy. Then, an improved map point optimization strategy based on K-means clustering is designed, which exploits the clustering algorithm to refine the static map point labels while mitigating its inherent limitations. Finally, experimental results on the TUM dataset and in real scenes show that, compared with state-of-the-art visual SLAM methods, the proposed method achieves highly robust and accurate camera pose estimation in highly dynamic environments.
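The idea of weighting the pose-estimation cost by per-point static probabilities can be sketched as follows. This is an illustrative example, not the authors' implementation: the Gaussian probability model, the deviation scales `sigma_r` and `sigma_i`, and the toy data are assumptions made for demonstration.

```python
import numpy as np

def static_probability(reproj_dev, intensity_dev, sigma_r=2.0, sigma_i=10.0):
    """Fuse reprojection and intensity deviations into a static probability.
    Gaussian kernels are an assumed model; the paper's exact formulation
    may differ."""
    p_r = np.exp(-0.5 * (reproj_dev / sigma_r) ** 2)
    p_i = np.exp(-0.5 * (intensity_dev / sigma_i) ** 2)
    return p_r * p_i

def weighted_pose_cost(residuals, reproj_dev, intensity_dev):
    """Sum of squared reprojection residuals, each down-weighted by the
    probability that its map point is static."""
    w = static_probability(reproj_dev, intensity_dev)
    return np.sum(w * np.sum(residuals ** 2, axis=1))

# Toy example: two well-behaved points and one point with large deviations
# (likely on a dynamic object), which contributes almost nothing to the cost.
residuals = np.array([[0.5, 0.2], [0.1, 0.3], [8.0, 6.0]])
reproj_dev = np.array([0.5, 0.3, 10.0])
intensity_dev = np.array([2.0, 1.0, 80.0])
print(weighted_pose_cost(residuals, reproj_dev, intensity_dev))  # ≈ 0.37
```

Points on moving objects receive near-zero weight, so they are effectively excluded from the optimization without a hard inlier/outlier threshold.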
