Abstract

Monocular vision sensors are often affected by rapid changes in the orientation of the load platform and by drastic illumination changes when a mobile device moves autonomously with high maneuverability. The images captured by the visual sensor also suffer from severe motion blur, which, together with weakly textured environments, degrades the continuity and accuracy of a visual autonomous navigation system. To enhance the stability of the system, in this letter we propose a vision-led multisource data fusion navigation algorithm. The system uses visual information for trajectory estimation, adds inertial measurement unit (IMU) measurements to a sliding window for optimization, and finally applies global navigation satellite system (GNSS) data as a re-constraint through factor graph optimization to further improve trajectory accuracy. Experiments on public datasets covering a variety of scene categories show that the trajectories produced by our algorithm are more complete and stable and better meet the system's autonomous navigation requirements.

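As a minimal, hypothetical sketch of the fusion idea (not the implementation from the letter), the example below treats the visual-inertial odometry output as relative motion constraints between consecutive states and the GNSS fixes as absolute position constraints, stacking both into a single linear least-squares problem, the simplest instance of a factor-graph-style trajectory optimization. The function name fuse_vio_gnss, the 2-D state layout, and the scalar weights are assumptions made purely for illustration.

```python
# Minimal sketch: fuse drifting VIO relative motions with sparse GNSS fixes
# by solving one linear least-squares problem over all 2-D positions.
import numpy as np

def fuse_vio_gnss(vio_deltas, gnss_fixes, w_vio=1.0, w_gnss=0.5):
    """Estimate N 2-D positions from VIO relative motions and sparse GNSS fixes.

    vio_deltas : (N-1, 2) relative displacements between consecutive states (from VIO)
    gnss_fixes : dict {state_index: (x, y)} absolute positions (from GNSS)
    w_vio, w_gnss : scalar weights standing in for inverse measurement covariances
    """
    n = len(vio_deltas) + 1
    rows, rhs = [], []

    # Odometry (VIO) factors: x_{i+1} - x_i = delta_i
    for i, d in enumerate(vio_deltas):
        for axis in range(2):
            row = np.zeros(2 * n)
            row[2 * i + axis] = -w_vio
            row[2 * (i + 1) + axis] = w_vio
            rows.append(row)
            rhs.append(w_vio * d[axis])

    # GNSS factors: x_k = fix_k (absolute re-constraint on the trajectory)
    for k, fix in gnss_fixes.items():
        for axis in range(2):
            row = np.zeros(2 * n)
            row[2 * k + axis] = w_gnss
            rows.append(row)
            rhs.append(w_gnss * fix[axis])

    A, b = np.vstack(rows), np.asarray(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n, 2)

if __name__ == "__main__":
    # Drifting VIO (each step slightly too long) corrected by two GNSS fixes.
    deltas = np.array([[1.1, 0.0]] * 5)
    fixes = {0: (0.0, 0.0), 5: (5.0, 0.0)}
    print(fuse_vio_gnss(deltas, fixes))
```

In the actual system the states would also include orientation and velocity, the VIO constraints would come from the sliding-window optimization, and the GNSS factors would enter a nonlinear factor graph; the toy example only illustrates how absolute GNSS measurements re-constrain a trajectory that would otherwise drift.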