Abstract
Visual Simultaneous Localization and Mapping (V-SLAM) is pivotal for precise positioning and mapping. However, visual data from crowd-sourced datasets often contain deficiencies that can lead to positioning errors. Despite existing optimization techniques, current algorithms do not adequately adapt to the varied data encountered in vehicle driving scenarios. To address this gap, this study introduces a novel SLAM framework (SLG-SLAM) that refines trajectories by integrating semantic information, laser point clouds, and global navigation satellite system (GNSS) data into V-SLAM. Initial trajectory estimates are made after filtering out dynamic targets, then refined with matched laser point clouds, and finally corrected for scale and direction using GNSS. The efficacy of this approach is assessed on four public datasets and one self-collected dataset, showing significant enhancements across all of them. The proposed method reduces the mean absolute trajectory error by 43.50% on the KITTI dataset and 14.91% on the MVE dataset compared to the baseline. Unlike the baseline, which fails on the three other datasets, the proposed method successfully performs localization and mapping on all of them. Additionally, the proposed method consistently outperforms three other single-source methods (DynaSLAM, MCL, MVSLAM), demonstrating its superior adaptability and effectiveness.
International Journal of Applied Earth Observation and Geoinformation