Abstract

This paper presents a practical approach to solving the simultaneous localization and mapping (SLAM) problem for autonomous mobile platforms using natural visual landmarks obtained from a stereoscopic camera. It is an attempt to depart from traditional sensors such as laser rangefinders in order to gain the many benefits of nature-inspired, information-rich 3D vision sensors. While this makes the system fully observable, in that the sensor provides enough information (range and bearing) to compute the full 2D state of the observed landmarks from a single position, it is also true that depth information is difficult to rely on, particularly for measurements beyond a few meters (in fact the full 3D state is observable, but here robot motion is constrained to 2D and only the 2D problem is considered). The work presented here attempts to overcome this drawback by tackling the problem from a partially measurable SLAM perspective, in that only the landmark bearing from one of the cameras is employed in the fusion estimation. Range estimates from the stereo pair are used only during map building, in the landmark initialization phase, to provide a reasonably accurate initial estimate. An additional benefit of the approach presented here lies in the data association aspect of SLAM. The availability of powerful feature extraction algorithms from the vision community, such as SIFT, permits a more flexible SLAM implementation separated from feature representation, extraction and matching, essentially carrying out matching with minimal recourse to geometry.
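
The following is a minimal sketch, not the paper's implementation, of the idea described above: stereo range is used only once, to seed a new landmark estimate, while the filter update itself fuses only the bearing measured from one camera. All function names and the state ordering are illustrative assumptions.

```python
import numpy as np

def init_landmark(robot_pose, bearing, stereo_range):
    """Seed a new landmark in world coordinates from a single stereo
    observation; range is used only here (illustrative assumption)."""
    x, y, theta = robot_pose
    lx = x + stereo_range * np.cos(theta + bearing)
    ly = y + stereo_range * np.sin(theta + bearing)
    return np.array([lx, ly])

def predicted_bearing(robot_pose, landmark):
    """Predicted bearing of a landmark as seen from one camera;
    this is the only quantity fused in the filter update."""
    x, y, theta = robot_pose
    dx, dy = landmark[0] - x, landmark[1] - y
    return np.arctan2(dy, dx) - theta

def bearing_jacobian(robot_pose, landmark):
    """Jacobian of the bearing measurement with respect to the
    (assumed) state ordering [x, y, theta, lx, ly]."""
    x, y, theta = robot_pose
    dx, dy = landmark[0] - x, landmark[1] - y
    q = dx ** 2 + dy ** 2
    return np.array([[dy / q, -dx / q, -1.0, -dy / q, dx / q]])
```

In such a scheme the uncertain stereo depth never enters the measurement update, so its error only affects the initial landmark covariance, which subsequent bearing-only updates can correct.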
