Abstract

Direct, keyframe-based visual SLAM (simultaneous localization and mapping) systems such as LDSO are widely used in robotics. In the direct method, selected pixels are projected directly between frames to predict motion and perform frame-to-frame tracking. However, such systems require a large number of keyframes to maintain stable motion tracking, resulting in a larger map than feature-based (indirect) SLAM systems produce. Maintaining this large map data becomes a bottleneck for robotic systems that track over long trajectories. To address this problem, we propose using a machine learning model to predict optical flow. An indicator activates the learning model whenever direct image alignment performs poorly. We design a point selection method that uses the pixel flow map predicted by the model. We implement the proposed method on top of the LDSO system and evaluate its performance by absolute pose error, the number of selected keyframes, and time consumption. We then verify the proposed method on ORB-SLAM3. The results show that the proposed method reduces the number of selected keyframes while achieving performance comparable to the state of the art.
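The fallback logic summarized above can be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: the `flow_model` and `tracker` interfaces, the threshold value, and the flow-magnitude selection criterion are all hypothetical stand-ins for the components the abstract describes (an indicator on direct image alignment quality, a learned optical-flow predictor, and a flow-based point selector).

```python
# Illustrative sketch (not from the paper). All interfaces below are
# hypothetical: flow_model, tracker, and the threshold are assumptions.
import numpy as np

PHOTOMETRIC_ERROR_THRESHOLD = 12.0   # assumed tuning parameter for the indicator
TOP_K_POINTS = 2000                  # assumed number of pixels kept per frame


def select_points(prev_frame, cur_frame, flow_model, tracker):
    """Return pixel coordinates (N x 2 array of x, y) to track in cur_frame."""
    # Direct image alignment between consecutive frames; the residual serves
    # as the indicator of tracking quality.
    pose, residual = tracker.align_direct(prev_frame, cur_frame)

    # If direct alignment performs well, keep the usual gradient-based
    # point selection of the direct method.
    if residual < PHOTOMETRIC_ERROR_THRESHOLD:
        return tracker.select_by_gradient(cur_frame, TOP_K_POINTS)

    # Otherwise, query the learned model for a dense flow map and prefer
    # pixels with large predicted motion (one plausible selection criterion).
    flow = flow_model.predict(prev_frame, cur_frame)        # H x W x 2 flow map
    magnitude = np.linalg.norm(flow, axis=-1)               # per-pixel flow magnitude
    flat_idx = np.argpartition(magnitude.ravel(), -TOP_K_POINTS)[-TOP_K_POINTS:]
    ys, xs = np.unravel_index(flat_idx, magnitude.shape)
    return np.stack([xs, ys], axis=1)
```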
