Abstract

It is now well known that increasing the number of features maintained in the mapping process of monocular SLAM improves the accuracy of the outcome; however, it also increases the state dimension and the associated computational cost. This paper investigates and evaluates the improvement in SLAM results obtained by exploiting camera motion information. For a camera mounted on a vehicle, the camera motion is governed by the vehicle motion model. The work in this paper shows that by introducing relative pose constraints computed from image points while taking the underlying vehicle motion model into account (for example, the non-holonomic vehicle motion model), it is possible to incorporate vehicle motion information into the system and achieve even more accurate SLAM results than maintaining all extracted features in the map. It is demonstrated that this process does not increase the state dimension and preserves the sparse structure of the SLAM problem, so the underlying sparseness can still be exploited for computational efficiency. Simulation and experimental results are presented to demonstrate the relative merits of incorporating vehicle motion information for motion estimation and mapping.
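To illustrate the general idea (not the authors' implementation), the following minimal 2D sketch shows how a non-holonomic motion constraint can enter a least-squares SLAM formulation as an extra residual between consecutive poses, alongside the relative-pose factors derived from image points. The function names, weights, and the circular-arc assumption are illustrative choices, not taken from the paper.

```python
# Minimal sketch of a 2D pose-graph cost with (a) vision-derived relative-pose
# factors and (b) a soft non-holonomic constraint per consecutive pose pair.
# All names and weights here are illustrative assumptions, not the paper's code.
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def relative_pose_residual(xi, xj, zij):
    """Residual of a measured relative pose z_ij = (dx, dy, dtheta)
    between poses x_i and x_j, each given as (x, y, theta)."""
    ci, si = np.cos(xi[2]), np.sin(xi[2])
    # Predicted relative translation expressed in frame i.
    dx = ci * (xj[0] - xi[0]) + si * (xj[1] - xi[1])
    dy = -si * (xj[0] - xi[0]) + ci * (xj[1] - xi[1])
    dth = wrap(xj[2] - xi[2])
    return np.array([dx - zij[0], dy - zij[1], wrap(dth - zij[2])])

def nonholonomic_residual(xi, xj):
    """Soft constraint from a car-like (non-holonomic) motion model:
    assuming locally circular-arc motion, the relative translation
    direction (in frame i) bisects the heading change, so the lateral
    component left after removing half the rotation should be ~0."""
    ci, si = np.cos(xi[2]), np.sin(xi[2])
    dx = ci * (xj[0] - xi[0]) + si * (xj[1] - xi[1])
    dy = -si * (xj[0] - xi[0]) + ci * (xj[1] - xi[1])
    half = 0.5 * wrap(xj[2] - xi[2])
    return np.array([dy * np.cos(half) - dx * np.sin(half)])

def total_cost(poses, vision_factors, w_vision=1.0, w_nh=10.0):
    """Weighted squared error: vision relative-pose factors plus one
    non-holonomic factor per consecutive pose pair. `vision_factors`
    is a list of (i, j, z_ij) tuples."""
    cost = 0.0
    for i, j, z in vision_factors:
        r = relative_pose_residual(poses[i], poses[j], z)
        cost += w_vision * float(r @ r)
    for k in range(len(poses) - 1):
        r = nonholonomic_residual(poses[k], poses[k + 1])
        cost += w_nh * float(r @ r)
    return cost

if __name__ == "__main__":
    # Three poses roughly along a gentle left turn.
    poses = [np.array([0.0, 0.0, 0.0]),
             np.array([1.0, 0.05, 0.1]),
             np.array([1.95, 0.25, 0.2])]
    factors = [(0, 1, np.array([1.0, 0.05, 0.1])),
               (1, 2, np.array([1.0, 0.05, 0.1]))]
    print("cost:", total_cost(poses, factors))
```

Note that the added constraint introduces no new state variables and only couples adjacent poses, which is consistent with the abstract's point that the state dimension and the sparse problem structure are preserved.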
