Abstract
Visual simultaneous localization and mapping (VSLAM) is becoming increasingly popular in research and industry as a solution for mapping an unknown environment with moving cameras. However, classic methods such as Extended Kalman Filter (EKF)-based VSLAM have two significant limitations. First, their robustness and accuracy drop dramatically when low-frame-rate cameras are used or when sudden changes in camera velocity occur. Second, their dynamic models are expensive to build, or too simple to simulate complex movements. In this paper, a novel VSLAM approach called conditional simultaneous localization and mapping (C-SLAM) is proposed, in which the camera state transition is derived from image data using optical flow constraints and epipolar geometry in the prediction stage. This improvement not only increases prediction accuracy but also replaces the commonly used predefined dynamic models, which require additional computation. Compared to classic VSLAM approaches, C-SLAM predicts more accurately and has high computational efficiency, especially under conditions such as abrupt changes in camera velocity or low camera frame rates. These advantages are supported by the experimental results and analysis presented in this paper.
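The contrast the abstract draws can be illustrated with a minimal sketch of an EKF prediction stage. The code below is not from the paper: the state layout, function names, and the assumption that a relative-motion estimate (e.g. from optical flow plus epipolar geometry) is already available are all hypothetical, introduced only to show how an image-derived prediction can replace a predefined constant-velocity dynamic model.

```python
import numpy as np

def ekf_predict_constant_velocity(x, P, Q, dt):
    """Classic predefined dynamic model: constant velocity.

    State x = [px, py, vx, vy]; breaks down under abrupt
    velocity changes or low frame rates, as the abstract notes.
    """
    F = np.eye(4)
    F[0, 2] = dt
    F[1, 3] = dt
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_predict_from_images(x, P, Q, rel_motion):
    """C-SLAM-style idea (sketch only): drive the prediction with a
    relative motion estimated from image data instead of a model.

    `rel_motion` is assumed to come from optical flow constraints
    and epipolar geometry between consecutive frames; estimating it
    is outside this sketch.
    """
    x_pred = x.copy()
    x_pred[:2] += rel_motion   # apply the measured displacement
    P_pred = P + Q             # inflate covariance by process noise
    return x_pred, P_pred
```

Under an abrupt velocity change, the constant-velocity prediction extrapolates the stale velocity, while the image-driven prediction follows the motion actually observed between frames.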