Abstract

SLAM algorithms for dynamic environments have been studied for many years. Most existing methods apply a semantic segmentation model to SLAM and remove predetermined classes of dynamic objects. However, these methods discard static elements that may exist within dynamic objects during the SLAM process. In this paper, we propose an RGB-D Visual SLAM method for dynamic environments based on scene flow and a Conditional Random Field. The proposed method exploits static elements inside dynamic objects for visual odometry. First, we use dense optical flow to obtain pixel correspondences between frames and the RANSAC algorithm to estimate the relative pose. Then, we combine the depth maps of the two frames with the correspondence information to compute the scene flow. From this scene flow we derive a per-pixel dynamic likelihood, build a dynamic mask, and refine the mask with the Conditional Random Field to make it robust to noise. We conducted experiments on TUM dataset sequences containing dynamic objects. In these experiments, the proposed algorithm achieved results similar to or better than those of previous methods that rely on semantic segmentation.
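To make the described pipeline concrete, the sketch below is a rough Python/OpenCV rendering of the steps named in the abstract (dense optical flow, RANSAC relative pose, scene-flow residuals, dynamic likelihood, CRF refinement). It is not the authors' implementation: the function names, thresholds, and the choice of cv2.solvePnPRansac and the pydensecrf library are illustrative assumptions.

    # Illustrative sketch of the pipeline in the abstract; all parameter
    # values and helper names are assumptions, not the paper's code.
    import numpy as np
    import cv2

    def backproject(depth, K):
        """Back-project a depth map (metres) to one 3D point per pixel."""
        h, w = depth.shape
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)

    def dynamic_mask(rgb1, rgb2, depth1, depth2, K, scale=0.05):
        h, w = depth1.shape
        g1, g2 = (cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in (rgb1, rgb2))

        # 1. Dense optical flow gives a pixel match for every pixel.
        flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        u2, v2 = u + flow[..., 0], v + flow[..., 1]

        # 2. RANSAC relative pose from 3D points in frame 1 and their
        #    matched 2D locations in frame 2 (PnP used here as one option).
        pts3d = backproject(depth1, K)
        valid = (depth1 > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
        obj = pts3d[valid].astype(np.float32)
        img = np.stack([u2[valid], v2[valid]], axis=-1).astype(np.float32)
        idx = np.random.choice(len(obj), size=min(len(obj), 5000), replace=False)
        _, rvec, tvec, _ = cv2.solvePnPRansac(obj[idx], img[idx],
                                              K.astype(np.float64), None,
                                              reprojectionError=3.0)
        R, _ = cv2.Rodrigues(rvec)

        # 3. Scene-flow residual: 3D motion left over after removing camera motion.
        pts3d_pred = pts3d @ R.T + tvec.ravel()        # where static geometry should be
        u2i = np.clip(np.round(u2).astype(int), 0, w - 1)
        v2i = np.clip(np.round(v2).astype(int), 0, h - 1)
        pts3d_obs = backproject(depth2, K)[v2i, u2i]   # where the matched pixel actually is
        residual = np.linalg.norm(pts3d_obs - pts3d_pred, axis=-1)

        # 4. Dynamic likelihood and a coarse binary mask (True = dynamic).
        likelihood = 1.0 - np.exp(-residual / scale)
        mask = (likelihood > 0.5) & valid & (depth2[v2i, u2i] > 0)
        return mask, likelihood

    def refine_with_crf(likelihood, rgb, iters=5):
        """Smooth the dynamic likelihood with a dense CRF (pydensecrf assumed)."""
        import pydensecrf.densecrf as dcrf
        from pydensecrf.utils import unary_from_softmax

        h, w = likelihood.shape
        probs = np.stack([1.0 - likelihood, likelihood]).astype(np.float32)
        d = dcrf.DenseCRF2D(w, h, 2)
        d.setUnaryEnergy(unary_from_softmax(probs))
        d.addPairwiseGaussian(sxy=3, compat=3)
        d.addPairwiseBilateral(sxy=50, srgb=13,
                               rgbim=np.ascontiguousarray(rgb), compat=10)
        q = np.array(d.inference(iters)).reshape((2, h, w))
        return q[1] > q[0]                             # refined dynamic mask

In such a pipeline, pixels whose mask value is False (static, including static surfaces inside segmented dynamic objects) would be the ones kept for visual odometry; the exact likelihood model and CRF potentials used in the paper may differ from this sketch.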
