Abstract

This paper addresses the challenge of generating clear image frames and minimizing the loss of keyframes when a robot undergoes rapid large viewing angle motion. These issues often lead to detrimental consequences such as trajectory drift and tracking loss during the construction of curved motion trajectories. To tackle this, we propose a novel visual simultaneous localization and mapping (SLAM) algorithm, TKO‐SLAM, which is based on time‐delay feature regression and keyframe position optimization. TKO‐SLAM uses a multiscale recurrent neural network to rectify object deformation and image motion smear. This network repairs the time‐delay image features caused by the rapid movement of the robot, thereby enhancing visual clarity. In addition, inspired by the keyframe selection strategy of the ORB‐SLAM3 algorithm, we introduce a grayscale motion‐based image processing method to supplement keyframes that may be omitted because of the robot's rapid large viewing angle motion. To further refine the algorithm, the time‐delay feature regression image keyframes and adjacent secondary keyframes are used as dual measurement constraints to optimize camera poses and restore robot trajectories. Experiments on the benchmark RGB‐D dataset TUM and in real‐world scenarios show that the TKO‐SLAM algorithm achieves more than 10% better localization accuracy than the PKS‐SLAM algorithm in rapid large viewing angle motion scenarios, and compares favorably with state‐of‐the‐art (SOTA) algorithms.
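The abstract does not specify how the grayscale motion‐based criterion decides when a supplementary keyframe is needed. A minimal sketch of one plausible criterion, assuming a simple mean absolute grayscale difference against the last keyframe (the function name and threshold are hypothetical, not from the paper):

```python
import numpy as np

def should_add_keyframe(last_kf_gray, curr_gray, motion_thresh=18.0):
    """Flag the current frame as a supplementary keyframe when the mean
    absolute grayscale change since the last keyframe exceeds a threshold,
    which can indicate rapid large viewing angle motion between frames."""
    diff = np.abs(curr_gray.astype(np.float32) - last_kf_gray.astype(np.float32))
    return float(diff.mean()) > motion_thresh

# Toy usage: a large uniform brightness change trips the check,
# while a nearly identical frame does not.
kf = np.full((480, 640), 100, dtype=np.uint8)
fast_motion_frame = np.full((480, 640), 130, dtype=np.uint8)
slow_motion_frame = np.full((480, 640), 102, dtype=np.uint8)
print(should_add_keyframe(kf, fast_motion_frame))  # → True
print(should_add_keyframe(kf, slow_motion_frame))  # → False
```

In practice such a check would run between the regular ORB‐SLAM3 keyframe decisions, inserting secondary keyframes only when inter‐frame grayscale motion is high.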

