Abstract

Existing keyframe-based robot visual SLAM (Simultaneous Localization and Mapping) methods suffer from sparse feature-point extraction and easy keyframe loss, which leads to trajectory deviation; moreover, most of these methods propose only a holistic system design without a detailed study of feature extraction in the front-end visual odometry. In this paper, an affine-transformation-based ORB feature extraction method (Affine-ORB) is applied to an existing robot visual SLAM framework, and an improved visual SLAM method is proposed. In the proposed SLAM, feature points are first described with the BRISK method; second, affine transformations are introduced into the ORB feature extraction; finally, the samples are normalized and the image is restored. A handheld-camera SLAM experiment shows that the keyframe loss rate of the proposed algorithm is significantly reduced. In evaluation experiments on the TUM, KITTI and EuRoC datasets, the keyframe extraction performance and positioning accuracy of the proposed SLAM algorithm are compared with PTAM, LSD-SLAM and ORB-SLAM, respectively. The frame loss rate of the affine-transformation-based feature extraction SLAM decreases from 0.5% to 0.2%, and the root mean square error (RMSE) of the estimated trajectory is drastically reduced. That is, keyframes are extracted faster, the keyframe loss rate at the same motion speed is lower, and the positioning accuracy is higher.
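The core of the Affine-ORB step above is simulating different camera viewpoints with affine warps, detecting features in each warped image, and mapping the detected keypoints back to the original image frame (the normalization/restoration step). The following is a minimal NumPy sketch of that geometry, assuming ASIFT-style tilt-and-rotation sampling of viewpoints; the function names are illustrative, not the paper's implementation, and the actual ORB detection and BRISK description (e.g. via OpenCV) are omitted.

```python
import numpy as np

def affine_simulation_params(tilts=(1.0, np.sqrt(2.0), 2.0), rot_step_deg=72.0):
    """Generate (tilt t, in-plane rotation phi) pairs, ASIFT-style:
    larger tilts get denser rotation sampling. Sampling values are
    illustrative assumptions, not taken from the paper."""
    params = []
    for t in tilts:
        if np.isclose(t, 1.0):
            params.append((1.0, 0.0))  # the original, unwarped view
        else:
            step = rot_step_deg / t
            phi = 0.0
            while phi < 180.0:
                params.append((t, phi))
                phi += step
    return params

def affine_matrix(t, phi_deg):
    """2x3 affine warp: rotate by phi, then tilt by scaling the
    x-axis with 1/t (simulates a change of camera latitude)."""
    phi = np.deg2rad(phi_deg)
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    T = np.array([[1.0 / t, 0.0],
                  [0.0,     1.0]])
    A = T @ R
    return np.hstack([A, np.zeros((2, 1))])

def map_back(points, M):
    """Restore keypoints detected in a warped image to original image
    coordinates by inverting the 2x3 affine transform."""
    A, b = M[:, :2], M[:, 2]
    return (points - b) @ np.linalg.inv(A).T
```

In a full pipeline, each simulated view would be passed to an ORB detector, described with BRISK, and the keypoint coordinates restored with `map_back` before being handed to the SLAM back end.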
