Abstract

Current Simultaneous Localization and Mapping (SLAM) systems usually depend on feature point detection and matching to establish correspondences between frames. However, feature points such as Scale-Invariant Feature Transform (SIFT) or Oriented FAST and Rotated BRIEF (ORB) may fail to be detected and matched in weakly textured scenes. Recent advances in deep-learning-based optical flow make it possible to establish stable, dense correspondences between frames even in such scenes. In this paper we propose RAFT-SLAM, which integrates an advanced deep-learning-based optical flow module into the SLAM system. The correspondences estimated by optical flow and by feature point matching are fused seamlessly, yielding high-quality cross-frame correspondences and improving localization accuracy. Experimental results demonstrate the effectiveness of the proposed method.
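The abstract does not specify how the two correspondence sources are fused; the sketch below is a minimal, hypothetical illustration using NumPy only. It samples sparse correspondences from a dense flow field (as a RAFT-style network would produce) and merges them with feature matches, dropping flow points that duplicate an existing feature match. The function names, the grid `stride`, and the de-duplication `radius` are assumptions for illustration, not the paper's actual scheme.

```python
import numpy as np

def flow_to_correspondences(flow, stride=8):
    """Turn a dense optical-flow field of shape (H, W, 2), e.g. from a
    RAFT-style network, into sparse point correspondences sampled on a
    regular grid. Returns (points_in_frame1, points_in_frame2)."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h:stride, 0:w:stride]
    pts1 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    disp = flow[pts1[:, 1].astype(int), pts1[:, 0].astype(int)]
    pts2 = pts1 + disp
    # keep only correspondences that land inside the second image
    valid = ((pts2[:, 0] >= 0) & (pts2[:, 0] < w) &
             (pts2[:, 1] >= 0) & (pts2[:, 1] < h))
    return pts1[valid], pts2[valid]

def fuse_correspondences(flow_pts, feat_pts, radius=2.0):
    """Merge flow-based and feature-based correspondences, keeping all
    feature matches and discarding flow points whose source pixel lies
    within `radius` pixels of a feature match (simple de-duplication)."""
    p1f, p2f = flow_pts
    q1, q2 = feat_pts
    if len(q1) == 0:
        return p1f, p2f
    d = np.linalg.norm(p1f[:, None, :] - q1[None, :, :], axis=2)
    keep = d.min(axis=1) > radius
    return (np.concatenate([q1, p1f[keep]]),
            np.concatenate([q2, p2f[keep]]))

# Example with a synthetic constant flow of (+3, +2) pixels on a 32x32 frame.
flow = np.tile(np.array([3.0, 2.0], dtype=np.float32), (32, 32, 1))
p1, p2 = flow_to_correspondences(flow, stride=8)
feat = (np.array([[0.0, 0.0]], dtype=np.float32),
        np.array([[3.0, 2.0]], dtype=np.float32))
f1, f2 = fuse_correspondences((p1, p2), feat)
```

In a real pipeline the feature matches would come from an ORB/SIFT matcher and the flow field from the RAFT network; the fused set can then be fed to the usual pose-estimation stage.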
