Abstract

We present a novel low-cost visual odometry method for estimating the ego-motion (self-motion) of ground vehicles by detecting the changes that motion induces in the images. Unlike traditional localization methods that rely on differential global positioning system (GPS), a precise inertial measurement unit (IMU), or 3D Lidar, the proposed method leverages only data from inexpensive visual sensors: forward and backward onboard cameras. Starting with spatial-temporal synchronization, the scale factor of the backward monocular visual odometry is estimated with an MSE optimization method in a sliding window. Then, for trajectory estimation, an improved two-layer Kalman filter is proposed, comprising orientation fusion and position fusion; in the orientation fusion step, the trajectory error space, represented by unit quaternions, is used as the state of the filter. The resulting system enables high-accuracy, low-cost ego-pose estimation and is robust to camera module degradation, automatically reducing the confidence of the failed sensor in the fusion pipeline. It can therefore operate in the presence of complex and highly dynamic motion, such as entering and exiting tunnels, texture-less or illumination-changing environments, bumpy roads, and even failure of one of the cameras. The experiments carried out in this paper show that our algorithm achieves the best performance on the evaluation indexes of average error in distance (AED), average error in the X direction (AEX), average error in the Y direction (AEY), and root mean square error (RMSE) compared with other state-of-the-art algorithms, which indicates that the output of our approach is superior to other methods.
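For illustration, the sliding-window MSE scale recovery described above can be sketched as a closed-form least-squares fit. This is a minimal sketch, not the authors' implementation: the function name, the window size, and the use of per-frame translation magnitudes from a metric reference track (e.g., the forward odometry) are all assumptions made for the example.

```python
# Hedged sketch: recover the scale factor of an up-to-scale backward monocular
# VO track by minimizing the mean squared error against a metric reference
# inside a sliding window. All names and parameters are illustrative.
import numpy as np

def estimate_scale(backward_steps, reference_steps, window_size=20):
    """Return the scalar s minimizing sum_i (s * b_i - r_i)^2 over the most
    recent window, where b_i / r_i are per-frame translation magnitudes of
    the up-to-scale backward VO and the metric reference trajectory."""
    b = np.asarray(backward_steps[-window_size:], dtype=float)
    r = np.asarray(reference_steps[-window_size:], dtype=float)
    denom = np.dot(b, b)
    if denom < 1e-12:                    # degenerate window (no observed motion)
        return 1.0
    return float(np.dot(b, r) / denom)   # closed-form least-squares solution

# Usage example with synthetic per-frame step lengths
backward = [0.51, 0.49, 0.52, 0.50]      # up-to-scale backward VO
reference = [1.02, 0.98, 1.05, 1.00]     # metric reference (e.g., forward VO)
print(estimate_scale(backward, reference, window_size=4))   # ~2.0
```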

Highlights

  • This paper aims at developing a visual fusion approach for online ego-motion estimation using data from onboard forward and backward cameras

  • The results show that our method achieves the best performance on average error in distance (AED), average error in the X direction (AEX), and average error in the Y direction (AEY) among all the methods, which indicates that the output of our method is the most stable compared with other methods [33,44,46]

  • The results show that our method achieves the best performance on the evaluation indexes of AED, AEX, AEY, and root mean square error (RMSE) among all the methods, which indicates that the output of our method is the most accurate compared with other methods (a sketch of these indexes follows below)
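As a reference for the indexes cited in the highlights, the sketch below computes them under assumed definitions: AED as the mean Euclidean distance between estimated and ground-truth positions, AEX/AEY as mean absolute errors along the X and Y axes, and RMSE over the distance error. The paper's exact formulas may differ; the function name and array layout are illustrative.

```python
# Hedged sketch of the evaluation indexes under assumed definitions.
import numpy as np

def trajectory_errors(estimated, ground_truth):
    """estimated, ground_truth: (N, 2) arrays of XY positions per frame."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    diff = est - gt
    dist = np.linalg.norm(diff, axis=1)          # per-frame position error
    return {
        "AED": dist.mean(),                      # average error in distance
        "AEX": np.abs(diff[:, 0]).mean(),        # average error in X
        "AEY": np.abs(diff[:, 1]).mean(),        # average error in Y
        "RMSE": np.sqrt((dist ** 2).mean()),     # root mean square error
    }
```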

Summary

Introduction

This paper aims at developing a visual fusion approach for online ego-motion estimation using data from onboard forward and backward cameras. In many real-world applications, ego-motion estimation and localization are pivotal to most vision-based navigation systems, especially for autonomous ground vehicles and robots [2,3,4], since they form the basis of subsequent scene understanding and vehicle control [5]. Ego-motion estimation in vehicles and robots is fundamental because it is usually a prerequisite for higher-level tasks such as robot-based surveillance, autonomous navigation, and path planning [6,7]. Compared with a traditional wheel-based or satellite-based localization system, a vision-based odometry system has the advantages of being impervious to inherent sensor inefficacies [8,9] (e.g., wheel encoder errors caused by uneven or slippery terrain or other adverse conditions) and of being usable in GPS-denied areas [10,11] (e.g., underwater and in urban tunnels). The proposed approach utilizes only visual perception cameras, which are lightweight, highly robust, and low-cost.

