Abstract

In this paper, a general visual servoing architecture for mobile robots is proposed to handle the situation in which the target scene moves out of the camera's field of view. Most existing visual servoing strategies rely on the assumption that the current image always shares common feature points with the desired one throughout the servoing procedure, which cannot actually be guaranteed by the controller. To avoid this problem, simultaneous localization and mapping (SLAM) is introduced into the visual servoing system: the front-end estimates the current pose of the mobile robot, while the back-end optimizes the desired pose. Moreover, compared with traditional servoing systems based on artificial feature points, the scale of the robot poses is fixed by the map in the proposed scheme, which makes it applicable to natural scenes. In addition, any position-based visual servoing controller can be implemented within the proposed architecture. The servoing structure has been implemented on a nonholonomic mobile robot, and experimental results are presented to illustrate the effectiveness and feasibility of the proposed approach.
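Since the abstract describes a position-based servoing loop driven by SLAM pose estimates, the following minimal Python sketch illustrates one possible controller of that kind for a nonholonomic (unicycle) robot. The polar-coordinate regulation law, the gain values, and the function names are illustrative assumptions and are not taken from the paper; in the proposed architecture, the current pose would come from the SLAM front-end and the desired pose from the optimized map.

```python
import math

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def pbvs_unicycle_control(current_pose, desired_pose,
                          k_rho=0.5, k_alpha=1.5, k_beta=-0.6):
    """Generic position-based pose-regulation law for a unicycle robot.

    current_pose, desired_pose: (x, y, theta) expressed in the map frame,
    e.g. as estimated by the SLAM front-end and optimized by the back-end.
    Returns (v, omega): forward and angular velocity commands.
    Gains satisfy k_rho > 0, k_beta < 0, k_alpha - k_rho > 0 for stability.
    """
    dx = desired_pose[0] - current_pose[0]
    dy = desired_pose[1] - current_pose[1]
    theta = current_pose[2]

    # Goal expressed in polar coordinates relative to the robot.
    rho = math.hypot(dx, dy)                      # distance to the goal
    alpha = wrap(math.atan2(dy, dx) - theta)      # heading error toward the goal
    beta = wrap(desired_pose[2] - theta - alpha)  # final-orientation error

    v = k_rho * rho
    omega = k_alpha * alpha + k_beta * beta
    return v, omega

# Example usage with hypothetical pose estimates:
# v, omega = pbvs_unicycle_control((0.0, 0.0, 0.0), (1.0, 0.5, 0.3))
```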
