Abstract

This paper presents a pose estimation method based on a 3D camera, the SwissRanger SR4000. The proposed method estimates the camera's ego-motion by using the intensity and range data produced by the camera. It detects SIFT (Scale-Invariant Feature Transform) features in one intensity image and matches them to those in the next intensity image. The resulting 3D data point pairs are used to compute the least-squares rotation and translation matrices, from which the attitude and position changes between the two image frames are determined. The method uses feature descriptors to perform feature matching, so it works well with large image motion between two frames without requiring a spatial correlation search. Due to the SR4000's consistent accuracy in depth measurement, the proposed method may achieve better pose estimation accuracy than a stereovision-based approach. Another advantage of the proposed method is that the range data of the SR4000 is complete and can therefore also be used for obstacle avoidance/negotiation. This makes it possible to navigate a mobile robot using a single perception sensor. In this paper, we validate the idea of the pose estimation method and characterize its pose estimation performance.
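The core step described above, recovering a least-squares rotation and translation from matched 3D point pairs, can be sketched with the standard SVD-based (Kabsch) solution. The abstract does not specify the exact solver the authors use, so this Python/NumPy snippet is an illustrative stand-in, not the paper's implementation; the function name and interface are hypothetical.

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares R, t such that R @ P[i] + t ~= Q[i].

    P, Q: (N, 3) arrays of matched 3D points from two camera frames
    (e.g. range data at matched SIFT feature locations).
    Uses the SVD-based (Kabsch) closed-form solution; this is a
    common choice, assumed here for illustration.
    """
    # Center both point sets on their centroids.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - cp, Q - cq
    # SVD of the 3x3 cross-covariance gives the optimal rotation.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

The attitude change between frames can then be read off from `R` (e.g. as Euler angles) and the position change from `t`.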
