Abstract

When a mobile platform moves autonomously with high maneuverability, monocular vision sensors are often affected by rapid changes in the platform's orientation and by abrupt illumination changes. The captured images also suffer from significant motion blur, which, together with weakly textured environments, degrades the continuity and accuracy of visual autonomous navigation. To enhance the stability of the system, in this letter we propose a vision-led multisource data fusion navigation algorithm. The system uses visual information for trajectory estimation, adds inertial measurement unit (IMU) measurements to a sliding window for optimization, and finally uses global navigation satellite system (GNSS) data as a re-constraint through factor graph optimization to further refine the trajectory accuracy. Experiments on public datasets covering a variety of scene categories show that the trajectories produced by our algorithm are more complete and stable and better meet the requirements of autonomous navigation.
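To make the fusion idea concrete, the following is a minimal sketch, not the authors' implementation: relative odometry increments (a stand-in for the sliding-window visual-inertial estimates) and sparse absolute GNSS fixes are combined as factors over 2D positions and solved by nonlinear least squares. The variable names, noise levels, and example measurements are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): GNSS fixes re-constrain a
# drifting odometry trajectory via least-squares factor graph optimization.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical inputs: drifting odometry increments and sparse GNSS fixes.
odom_deltas = np.array([[1.0, 0.0], [1.0, 0.1], [1.0, 0.2], [1.0, 0.3]])  # relative factors
gnss_fixes = {0: np.array([0.0, 0.0]), 4: np.array([4.0, 0.0])}           # absolute factors
sigma_odom, sigma_gnss = 0.1, 0.5                                          # assumed noise levels

n_poses = len(odom_deltas) + 1

def residuals(x):
    poses = x.reshape(n_poses, 2)
    res = []
    # Relative (odometry) factors: consecutive pose differences should match the increments.
    for i, d in enumerate(odom_deltas):
        res.append((poses[i + 1] - poses[i] - d) / sigma_odom)
    # Absolute (GNSS) factors: re-constrain selected poses toward global fixes.
    for i, z in gnss_fixes.items():
        res.append((poses[i] - z) / sigma_gnss)
    return np.concatenate(res)

# Initialize by dead-reckoning the odometry, then optimize the whole graph.
x0 = np.vstack([[0.0, 0.0], np.cumsum(odom_deltas, axis=0)]).ravel()
sol = least_squares(residuals, x0)
print(sol.x.reshape(n_poses, 2))  # GNSS factors pull the drifting trajectory back
```

In this toy setup the GNSS factors anchor the first and last poses, so the accumulated lateral drift of the odometry is redistributed across the trajectory, which is the same role the GNSS re-constraint plays in the proposed system.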
