Abstract

Visual SLAM can be broadly divided into direct and feature-based methods, and these two lines of work have developed relatively independently. In recent years, feature-based SLAM systems have been significantly improved by introducing more robust features, more effective matching and optimization frameworks, or additional sensors. The introduction of semantic information has also advanced direct methods. However, visual SLAM usually relies on a constant velocity assumption. This assumption may yield a poor initial pose, so that the subsequent optimization falls into a local minimum. Meanwhile, large changes in the field of view often increase feature-matching errors for feature-based methods, and large illumination changes often increase the photometric error for direct methods, both of which degrade SLAM performance. In this paper, we mainly target feature-based methods. In detail, we focus on the number and quality of feature matches, as well as the accuracy of the initial pose, and propose an interpolation mechanism for SLAM. Specifically, we introduce frame interpolation networks, originally used to increase the number of video frames, into visual SLAM. First, we point out that the traditional evaluation metrics for interpolation networks are not suitable for SLAM systems, and we provide a corresponding evaluation metric. Secondly, we verify that the mechanism works for both hand-crafted and deep-learning features. Thirdly, to verify the effectiveness and transferability of our method, we also apply it to SLAM systems based on the direct method, showing that it is applicable to direct methods as well. Fourthly, we point out that the interpolation network effectively slows down the pose change seen by the SLAM system by inserting an intermediate frame between the previous frame and the current frame, so that the system can obtain a better initial pose under the constant velocity assumption. This also helps explain why visual-inertial systems can effectively improve the performance of visual SLAM. Finally, to preserve the efficiency of the SLAM system, we provide a turning detection module and propose interpolating only at turnings. Extensive experiments and analyses verify the effectiveness and transferability of the proposed system.
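
To make the core idea concrete, the following is a minimal Python sketch, not taken from the paper, of a constant-velocity pose prior, a rotation-based turning detector, and the point at which an interpolated mid-frame could be tracked. The names `tracker`, `interpolator`, and the angle threshold are hypothetical placeholders, not APIs or parameters defined by the authors.

```python
# Hypothetical sketch (not the authors' code): illustrates (1) why inserting an
# interpolated frame between the previous and current frame makes the
# constant-velocity motion model a better pose prior, and (2) a simple
# turning detector that triggers interpolation only at sharp view changes.
# Poses are 4x4 homogeneous camera-to-world matrices; all names are assumptions.

import numpy as np


def constant_velocity_prediction(T_prev2, T_prev1):
    """Predict the next pose by replaying the last relative motion."""
    delta = T_prev1 @ np.linalg.inv(T_prev2)   # motion from frame k-2 to k-1
    return delta @ T_prev1                     # assume the same motion repeats


def rotation_angle_deg(T_a, T_b):
    """Relative rotation angle between two poses, in degrees."""
    R_rel = T_a[:3, :3].T @ T_b[:3, :3]
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))


def is_turning(T_prev2, T_prev1, angle_threshold_deg=5.0):
    """Flag a turning when the recent inter-frame rotation exceeds a threshold."""
    return rotation_angle_deg(T_prev2, T_prev1) > angle_threshold_deg


def track_with_optional_interpolation(tracker, interpolator,
                                      frame_prev, frame_curr,
                                      T_prev2, T_prev1):
    """
    Track frame_curr; at turnings, first track a synthesized mid-frame.
    `tracker.track(frame, T_init)` stands in for the SLAM front end and
    `interpolator(f0, f1)` for the video frame-interpolation network.
    """
    if is_turning(T_prev2, T_prev1):
        # Insert a mid-frame: each tracking step then covers roughly half the
        # motion, so the constant-velocity prior is closer to the true pose.
        frame_mid = interpolator(frame_prev, frame_curr)
        T_mid = tracker.track(
            frame_mid, T_init=constant_velocity_prediction(T_prev2, T_prev1))
        T_curr = tracker.track(
            frame_curr, T_init=constant_velocity_prediction(T_prev1, T_mid))
    else:
        T_curr = tracker.track(
            frame_curr, T_init=constant_velocity_prediction(T_prev2, T_prev1))
    return T_curr
```

The sketch mirrors the reasoning in the abstract: halving the inter-frame motion brings the constant-velocity prediction closer to the true pose, much as IMU integration provides a better pose prior in visual-inertial systems, while the turning check keeps the extra interpolation cost limited to frames with large view changes.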
