Abstract

In vision-based simultaneous localization and mapping (SLAM), a moving object entering the camera's field of view leaves a trail in the constructed point cloud map, so maps converted from such point clouds cannot be used directly for navigation. This paper studies dynamic object elimination in SLAM based on an image fusion algorithm. First, we construct a camera motion model for the moving platform. Then, the camera motion is decomposed into two parts, translation and rotation, and two constraints are proposed to locate the dynamic regions. Finally, the dynamic regions in the image sequence are set to blank; one image plane is chosen as the projection plane, and the information from the other images is mapped onto it to obtain a fusion image. The fusion image contains all of the environment information and can replace the image sequence in SLAM. Experimental results demonstrate that our method effectively eliminates the influence of dynamic objects.
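
The fusion step described above can be illustrated with a minimal sketch, assuming the homography from each frame to the chosen projection plane is already known and the dynamic regions have already been detected as binary masks. The function and parameter names below are illustrative, not the authors' implementation.

```python
import numpy as np
import cv2


def fuse_images(ref_img, ref_mask, other_imgs, homographies, other_masks):
    """Fill the dynamic (masked) regions of the reference frame with static
    content warped from the other frames onto the reference image plane.

    ref_mask / other_masks: uint8 masks where nonzero marks a dynamic region.
    homographies: 3x3 matrices mapping each other frame to the reference plane.
    """
    h, w = ref_img.shape[:2]
    fused = ref_img.copy()
    hole = ref_mask.astype(bool)  # dynamic pixels still left blank

    for img, H, mask in zip(other_imgs, homographies, other_masks):
        # Warp the frame and its dynamic mask onto the reference plane.
        warped = cv2.warpPerspective(img, H, (w, h))
        warped_mask = cv2.warpPerspective(mask, H, (w, h)).astype(bool)

        # Only fill holes with pixels that are static in this frame.
        fill = hole & ~warped_mask
        fused[fill] = warped[fill]
        hole &= ~fill  # these pixels are now recovered
        if not hole.any():
            break

    return fused
```

In this sketch, the fusion image keeps the static background from the reference view and borrows background pixels from other views only where the reference view was blanked, which matches the idea of replacing the image sequence with a single fused image free of moving objects.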
