Abstract

Efficient localization plays a significant role in the navigation systems of mobile autonomous robots. Traditional visual simultaneous localization and mapping systems based on point feature matching suffer from two shortcomings. The first is that feature tracking is not robust in environments with frequent changes in brightness. The second is that the large number of consecutive visual keyframes can consume substantial computational and storage resources in complex environments. To address these problems, an end-to-end real-time six-degrees-of-freedom pose estimation algorithm is proposed that tackles both the robustness and the efficiency challenges through a deep learning model. First, preprocessing operations such as cropping, averaging, and timestamp alignment are performed on the datasets to reduce computational cost and time. Second, the processed dataset is fed into the neural network model to extract the features most effective for matching. Finally, the robot's current 3D translation and 4D orientation (quaternion) are predicted and output, yielding an end-to-end localization system. A broad range of experiments is performed on both indoor and outdoor datasets. The experimental results demonstrate that translation and orientation accuracy in outdoor scenes improved by 32.9% and 31.4%, respectively. The average improvement in localization accuracy in indoor scenes is 38.4%, and the orientation improvement is 13.1%. Moreover, the algorithm's ability to predict the global motion trajectory of sequential images has been verified, and it outperforms other convolutional neural network methods.
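
The abstract does not include an implementation, but a minimal sketch of the described end-to-end pose regression, assuming a PyTorch ResNet backbone with PoseNet-style translation and quaternion output heads, could look like the following. The backbone choice, layer sizes, and all names here are illustrative assumptions, not the authors' architecture.

    # Illustrative sketch only: a PoseNet-style 6-DoF pose regressor
    # predicting 3D translation plus a 4D unit quaternion from one frame.
    # This is NOT the authors' exact model; the backbone is an assumption.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class PoseRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            # Assumed feature extractor: ResNet-18 with its classifier
            # head removed, leaving a 512-dim global feature vector.
            backbone = models.resnet18(weights=None)
            self.features = nn.Sequential(*list(backbone.children())[:-1])
            self.fc_trans = nn.Linear(512, 3)  # 3D translation (x, y, z)
            self.fc_rot = nn.Linear(512, 4)    # 4D orientation (quaternion)

        def forward(self, x):
            f = self.features(x).flatten(1)
            t = self.fc_trans(f)
            q = self.fc_rot(f)
            # Normalize so the rotation output is a valid unit quaternion.
            q = q / q.norm(dim=1, keepdim=True)
            return t, q

    # Usage: a preprocessed (cropped, normalized) frame batch -> pose.
    model = PoseRegressor()
    frame = torch.randn(1, 3, 224, 224)
    translation, quaternion = model(frame)
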
