Abstract
An important factor that reduces the accuracy of motion trajectories in existing VSLAM (Visual Simultaneous Localization and Mapping) systems is poor pose estimation by the visual odometry module. Existing methods generate many incorrect matches during feature matching, resulting in low accuracy of the rotations and translations computed between camera frames, which in turn reduces the robustness of the overall system. In addition, sparse feature-point maps do not provide a detailed description of the surrounding environment, which makes it difficult for devices equipped with VSLAM systems to perform advanced tasks such as navigation, path planning, and human-computer interaction. To address the accuracy problem, we filter the matches produced by existing feature matching algorithms using a motion consistency constraint and apply a random sample consensus (RANSAC) algorithm to obtain the best-quality matches from the selected samples, which are then used to compute the geometric transformation model and estimate the current pose. To address the problem of sparse map points, we use depth information from an RGB-D or stereo camera to build a dense mapping module that records the surrounding environment as a point cloud, providing data support for advanced tasks on the device.
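The following is a minimal sketch of the pipeline stages named in the abstract: RANSAC-based selection of feature matches to recover the relative rotation and translation, followed by back-projection of RGB-D depth into a point cloud for dense mapping. It is not the paper's implementation; the motion consistency constraint is approximated here by a simple displacement-consistency check, and the camera intrinsics `fx, fy, cx, cy` and the OpenCV-based workflow are assumptions for illustration only.

```python
import numpy as np
import cv2

def estimate_pose_ransac(pts_prev, pts_curr, K, max_disp=80.0):
    """Filter matches and estimate relative camera pose.

    pts_prev, pts_curr: Nx2 float arrays of matched pixel coordinates.
    K: 3x3 camera intrinsic matrix.
    max_disp: crude motion-consistency bound (pixels); stands in for the
              paper's motion consistency constraint (assumption).
    """
    # Rough motion-consistency pre-filter: discard matches whose pixel
    # displacement is implausibly large between consecutive frames.
    disp = np.linalg.norm(pts_curr - pts_prev, axis=1)
    keep = disp < max_disp
    p1, p2 = pts_prev[keep], pts_curr[keep]

    # RANSAC selects the inlier set that best supports a single
    # essential matrix (the geometric transformation model).
    E, inlier_mask = cv2.findEssentialMat(
        p1, p2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)

    # Decompose the essential matrix into rotation R and translation t
    # (translation is recovered up to scale for a monocular pair).
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inlier_mask)
    return R, t, inlier_mask

def depth_to_point_cloud(depth, K, depth_scale=1000.0):
    """Back-project an RGB-D depth image into a 3D point cloud.

    depth: HxW depth image (e.g. uint16 millimetres); depth_scale converts
           raw units to metres (value is an assumption for typical sensors).
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) / depth_scale
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack valid pixels into an Nx3 array of camera-frame points.
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```

In a full system, each frame's point cloud would be transformed by the estimated pose and fused into a global map; that fusion step is omitted here for brevity.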