Abstract

The fusion of vision and inertial data has recently become very popular in the robotics and computer vision communities due to the complementary nature of the two kinds of sensing modalities, which can be exploited in many applications, such as virtual reality (VR), 3D reconstruction, and simultaneous localization and mapping (SLAM) in robotics. However, most of the proposed fusion methods are based on filtering schemes, which makes them unsuitable for large-scale environments. In this paper, we fuse a stereo vision system with an inertial measurement unit (IMU) to address the SLAM problem, aiming to improve the accuracy and robustness of the results. A tightly coupled framework is adopted for data association, and a non-linear optimization backend is used to enhance the consistency of the map. Several strategies are employed to reduce the computational complexity, such as IMU pre-integration, QR decomposition, and keyframe-based feature extraction. DBoW-based loop-closure detection is integrated to provide constraints for the backend non-linear optimization. Experiments are carried out on an open-source dataset and on data collected by our intelligent vehicle, and the results show that the proposed approach outperforms existing monocular VINS and open-loop fusion methods in terms of accuracy and robustness.
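Of the complexity-reduction strategies listed above, IMU pre-integration has the most standard formulation, so a brief sketch may be helpful. Assuming the common formulation of Lupton and Sukkarieh and of Forster et al. (the paper's exact notation is not given in this abstract), the IMU measurements between two consecutive keyframes i and j are summarized into relative motion increments that depend only on the raw readings and the current bias estimates:

\Delta R_{ij} = \prod_{k=i}^{j-1} \mathrm{Exp}\!\left((\tilde{\omega}_k - b_g)\,\Delta t\right),
\quad
\Delta v_{ij} = \sum_{k=i}^{j-1} \Delta R_{ik}\,(\tilde{a}_k - b_a)\,\Delta t,
\quad
\Delta p_{ij} = \sum_{k=i}^{j-1} \left[\Delta v_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta R_{ik}\,(\tilde{a}_k - b_a)\,\Delta t^{2}\right],

where \tilde{\omega}_k and \tilde{a}_k are the gyroscope and accelerometer readings, b_g and b_a the corresponding biases, and \Delta t the IMU sampling interval. Because these increments are independent of the global pose and velocity, they can be computed once per keyframe interval and reused at every iteration of the non-linear optimization backend, which is what makes pre-integration effective at keeping the optimization tractable.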
