Abstract

This paper reformulates image-based visual servoing as a constraint-based robot task, in order to integrate it seamlessly with other task constraints in image space, in Cartesian space, in the joint space of the robot, or in the "image space" of any other sensor (e.g., force or distance). This approach allows various kinds of sensor data to be fused. The integration takes place via the specification of generic "feature coordinates" defined in the different task spaces. Independent control loops are defined to control the individual feature-coordinate setpoints in each of these task spaces. The outputs of the control loops are instantaneously combined into joint-velocity setpoints for a velocity-controlled robot that executes the task. The paper includes experimental results for several application scenarios.
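
To make the control architecture described above concrete, here is a minimal sketch, not the paper's actual formulation: it assumes a simple proportional law per feature coordinate and a weighted pseudoinverse to combine the feature-space velocities into joint velocities. All function names, gains, and Jacobians are hypothetical.

```python
import numpy as np

def feature_loop_output(f, f_setpoint, gain):
    """One independent control loop per feature coordinate:
    desired feature velocity = gain * (setpoint - measurement)."""
    return gain * (f_setpoint - f)

def joint_velocity_setpoints(jacobians, fdot_desired, weights):
    """Instantaneously combine feature-space velocities from the
    different task spaces (image, Cartesian, force, ...) by stacking
    their feature Jacobians and solving a weighted least-squares
    problem via a pseudoinverse."""
    J = np.vstack(jacobians)                # stacked task Jacobian
    W = np.diag(np.concatenate(weights))    # relative task weighting
    fdot = np.concatenate(fdot_desired)
    return np.linalg.pinv(W @ J) @ (W @ fdot)

# Example: an image-space feature (2 coords) and a force feature
# (1 coord) controlling a 6-DOF velocity-controlled robot.
J_img = np.random.randn(2, 6)   # image Jacobian (interaction matrix)
J_frc = np.random.randn(1, 6)   # force-feature Jacobian
fdot_img = feature_loop_output(np.array([0.1, -0.2]), np.zeros(2), 1.5)
fdot_frc = feature_loop_output(np.array([5.0]), np.array([2.0]), 0.2)
qdot = joint_velocity_setpoints([J_img, J_frc], [fdot_img, fdot_frc],
                                [np.ones(2), np.ones(1)])
```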
