Abstract

Neither of the classical visual servoing approaches, position-based and image-based, is completely satisfactory. In position-based visual servoing, the trajectory of the robot is well defined, but the approach suffers mainly from the risk that the image features leave the camera's field of view. Image-based visual servoing, on the other hand, has been found generally satisfactory and robust in the presence of camera and hand-eye calibration errors; however, in some cases singularities and local minima may arise, and the robot may be driven towards its joint limits. This paper is a step towards a synthesis of both approaches that retains their particular advantages: the trajectory of the camera is predictable, and the image features remain in the field of view of the camera. The basis is the introduction of three-dimensional information into the feature vector: the depth of the points and the pose of the object yield useful behavior in the control of the camera. Using the task-function approach, we derive the relationship between the velocity screw of the camera and the current and desired poses of the object in the camera frame. The camera is assumed to be calibrated, at least coarsely. Experimental results on real robotic platforms illustrate the presented approach.
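To make the structure of such a hybrid scheme concrete, the sketch below illustrates one common way of combining 2D image features with 3D pose information, in the spirit of 2.5D visual servoing: an extended image point (x, y, log Z) drives the translation, while the axis-angle rotation error θu drives the rotation. This is a minimal illustration under those assumptions, not the paper's exact control law; the function names (`hybrid_control`, `rotation_to_theta_u`) are hypothetical.

```python
# Minimal sketch of a hybrid visual servoing law that augments 2D image
# features with 3D information (point depth and object pose).
# Not the paper's exact formulation; all names are illustrative.
import numpy as np

def rotation_to_theta_u(R):
    """Axis-angle vector theta*u of a rotation matrix R.
    (The singular case theta = pi is ignored for brevity.)"""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

def hybrid_control(p, p_star, Z, Z_star, R_err, lam=0.5):
    """Camera velocity screw (v, omega) driving the error to zero.

    p, p_star : current and desired normalized image point (x, y)
    Z, Z_star : current and desired depth of that point
    R_err     : rotation between the current and desired camera frames
    lam       : control gain
    """
    # Rotation is regulated directly from the 3D pose information.
    theta_u = rotation_to_theta_u(R_err)
    omega = -lam * theta_u

    # Translational error on the extended image feature (x, y, log Z).
    x, y = p
    e_t = np.array([x - p_star[0], y - p_star[1], np.log(Z / Z_star)])

    # Interaction matrices of (x, y, log Z) w.r.t. v and omega.
    L_v = np.array([[-1.0, 0.0, x],
                    [0.0, -1.0, y],
                    [0.0, 0.0, -1.0]]) / Z
    L_w = np.array([[x * y, -(1.0 + x ** 2), y],
                    [1.0 + y ** 2, -x * y, -x],
                    [-y, x, 0.0]])

    # Enforce exponential decay of e_t given the chosen omega:
    # L_v v + L_w omega = -lam e_t  =>  v = L_v^{-1}(-lam e_t - L_w omega).
    v = np.linalg.solve(L_v, -lam * e_t - L_w @ omega)
    return v, omega
```

Regulating the rotation directly from θu is what makes the camera trajectory predictable, while keeping an image-point error in the feature vector is what keeps the features in the field of view.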

