Abstract

Vision-based systems can estimate a vehicle's position and attitude at low cost and with a simple implementation, but their performance is highly sensitive to environmental conditions. Moreover, because visual odometry is a dead-reckoning process, estimation errors accumulate without bound. Robustness to environmental conditions can be improved by augmenting vision-based systems with inertial sensors, and loop closing can be applied to reduce drift. However, with on-board sensors alone, vehicle poses can only be estimated in a local navigation frame, which is defined arbitrarily for each mission. To obtain globally referenced poses, absolute position estimates from GNSS can be fused with on-board measurements (obtained with either vision-only or visual-inertial odometry). In many cases, however (e.g. urban canyons, indoor environments), GNSS-based positioning is unreliable or entirely unavailable due to signal interruptions and blockage, while ranging links can still be obtained from various sources, such as signals of opportunity or low-cost radio-based ranging modules. We propose a graph-based method for fusing on-board odometry data with ranging measurements to mitigate pose drift in environments where GNSS-based positioning is unavailable. The proposed algorithm is evaluated with both synthetic and real data.
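The abstract does not give the details of the proposed method, but the general idea of graph-based fusion of dead-reckoned odometry with range measurements can be sketched as a nonlinear least-squares problem over the trajectory. The following is a minimal 2D, position-only illustration (headings, the IMU, and the paper's actual formulation are omitted); the trajectory, noise levels, and single fixed ranging anchor are all hypothetical values chosen for the example, not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical ground-truth trajectory and one fixed ranging anchor.
anchor = np.array([5.0, 0.0])
truth = np.array([[0, 0], [1, 0], [2, 0.5], [3, 1.0], [4, 1.5]], float)

rng = np.random.default_rng(0)
SIGMA = 0.05  # assumed (identical) odometry and ranging noise std. dev.

# Odometry edges: noisy relative displacements between consecutive poses.
odom = np.diff(truth, axis=0) + rng.normal(0, SIGMA, (len(truth) - 1, 2))
# Ranging edges: noisy distances from each pose to the anchor.
ranges = np.linalg.norm(truth - anchor, axis=1) + rng.normal(0, SIGMA, len(truth))

def residuals(x):
    """Stacked, noise-weighted residuals of all graph edges."""
    p = x.reshape(-1, 2)
    r_odom = (np.diff(p, axis=0) - odom).ravel() / SIGMA
    r_range = (np.linalg.norm(p - anchor, axis=1) - ranges) / SIGMA
    r_prior = (p[0] - truth[0]) / 1e-3  # pin the first pose (gauge freedom)
    return np.concatenate([r_odom, r_range, r_prior])

# Initial guess: the dead-reckoned trajectory from odometry alone,
# which is exactly the drifting estimate the range edges should correct.
init = np.vstack([truth[:1], truth[0] + np.cumsum(odom, axis=0)])
sol = least_squares(residuals, init.ravel())
est = sol.x.reshape(-1, 2)
```

In a real system the odometry edges would come from visual(-inertial) odometry, each pose would include attitude, and the problem would typically be solved incrementally with a factor-graph library rather than a batch `least_squares` call.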
