Abstract

Simultaneous localization and mapping (SLAM) has become a popular topic in the autonomous robot research community. SLAM requires a mobile robot, placed at an unknown location in an unknown environment, to incrementally build a consistent map of that environment while simultaneously determining its own location within the map. Odometry data is useful, but its noise accumulates over time. By fusing measurements from additional sensors through filtering, SLAM improves the accuracy of the estimate, and the extended Kalman filter (EKF) is the most widely used method for feeding the error back. In this thesis, a monocular vision-based SLAM system for a wheeled robot is proposed. A single web camera is the only measurement input used to correct the odometry data. Features are extracted and matched using the scale-invariant feature transform (SIFT). The absence of feature depth information is the key difficulty for monocular vision in SLAM. The inverse depth parametrization addresses this problem by representing a feature's position in inverse-depth coordinates under some reasonable assumptions; because this representation is closer to linear, the EKF converges to the feature position more easily. Combining the odometry data with the inverse depth method, the system produces an accurate map in tests on an indoor image sequence.
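For reference, a common form of the inverse depth parametrization mentioned above (following Civera, Davison, and Montiel; the symbols here are illustrative and not taken from the thesis) encodes a feature first observed from camera position (x_i, y_i, z_i) along azimuth-elevation direction (theta_i, phi_i) with inverse depth rho_i, so that the 3D point is recovered as

\[
\mathbf{p} = \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} + \frac{1}{\rho_i}\,\mathbf{m}(\theta_i,\phi_i),
\qquad
\mathbf{m}(\theta_i,\phi_i) = \begin{pmatrix} \cos\phi_i \sin\theta_i \\ -\sin\phi_i \\ \cos\phi_i \cos\theta_i \end{pmatrix}.
\]

Because the measurement equation is nearly linear in the inverse depth rho_i even for distant or newly initialized features, the EKF can represent their uncertainty with a Gaussian and converge more reliably than with a direct XYZ parametrization.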
