This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model for image projection is derived through catadioptric camera calibration. The geometric properties of the camera projection model are used to obtain the intersections of vertical lines in the scene with the ground plane. Unlike conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used for depth computation. The 3D coordinates of the base points on the ground are calculated from consecutive image frames. The motion trajectory is then derived by computing the rotation and translation between robot positions. We develop an algorithm for feature correspondence matching based on the invariance of structure in 3D space. Experimental results on real scene images demonstrate the feasibility of the proposed method for mobile robot localization.
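Two of the steps summarized above lend themselves to a brief illustration: depth recovery by intersecting a back-projected viewing ray with the known ground plane, and estimation of the rotation and translation between two robot poses from matched ground points. The sketch below is only a generic illustration of these ideas, not the paper's actual algorithm; the camera height, the plane z = 0, and the SVD-based (Kabsch) alignment are assumptions made for the example.

```python
import numpy as np

def intersect_ground(ray_dir, cam_height):
    """Point where a camera viewing ray hits the ground plane z = 0.
    ray_dir: 3-vector ray direction in the camera frame (z-component < 0);
    cam_height: assumed known height of the camera above the ground."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    t = cam_height / -ray_dir[2]                 # parameter where z reaches 0
    return np.array([0.0, 0.0, cam_height]) + t * ray_dir

def rigid_transform_2d(P, Q):
    """Least-squares R, t with Q ~= R @ P + t for matched 2D point sets
    (points stored as columns). Classic SVD-based Kabsch solution."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

Given base points observed from two consecutive frames, `rigid_transform_2d` recovers the planar motion of the robot between the two positions; chaining these incremental transforms yields a trajectory estimate.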