Abstract
We describe a method for the visual control of a robotic system that does not require an explicit calibration between image coordinates and world coordinates. By extracting control information directly from the image, we free our technique from the errors normally associated with a fixed calibration. We attach a camera system to a robot such that the camera system and the robot's gripper rotate together. As the camera system rotates about the gripper's rotational axis, the circular path traced out by a point-like feature projects to an elliptical path in image space. We gather the projected feature points over part of a rotation and fit the gathered data to an ellipse. The distance from the rotational axis to the feature point in world space is proportional to the size of the generated ellipse. As the rotational axis gets closer to the feature, the feature's projected path forms smaller and smaller ellipses. When the rotational axis is directly above the object, the trajectory degenerates from an ellipse to a single point. We demonstrate the efficacy of the algorithm on the peg-in-hole problem.
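To make the idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: it fits image-space feature samples to a general conic by least squares and recovers the ellipse's semi-axes, whose magnitude could serve as the calibration-free error signal described above. The function names, the f = -1 normalization, and the synthetic data are assumptions introduced for illustration; numpy is assumed available.

```python
# Illustrative sketch (hypothetical names, not the paper's code): fit projected
# feature points to a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 and use
# the fitted ellipse's size as an error signal that shrinks to zero when the
# rotational axis lies directly over the feature.
import numpy as np

def fit_conic(points):
    """Least-squares conic fit with the normalization f = -1 (adequate as long
    as the conic does not pass through the image origin)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    a, b, c, d, e = coeffs
    return a, b, c, d, e, -1.0

def ellipse_semi_axes(conic):
    """Semi-axis lengths of the ellipse given by the conic's matrix form."""
    a, b, c, d, e, f = conic
    M = np.array([[a, b / 2, d / 2],
                  [b / 2, c, e / 2],
                  [d / 2, e / 2, f]])
    A33 = M[:2, :2]
    lam = np.linalg.eigvalsh(A33)                # quadratic-part eigenvalues
    k = -np.linalg.det(M) / np.linalg.det(A33)   # scale factor of the conic
    semi = np.sqrt(k / lam)
    return np.sort(semi)[::-1]                   # (major, minor)

if __name__ == "__main__":
    # Synthetic stand-in for feature projections gathered over a partial
    # rotation of the gripper-mounted camera.
    t = np.linspace(0.0, 0.6 * np.pi, 30)
    pts = np.column_stack([120 + 40 * np.cos(t),
                           80 + 25 * np.sin(t)])
    pts += np.random.normal(scale=0.2, size=pts.shape)
    major, minor = ellipse_semi_axes(fit_conic(pts))
    print(f"major = {major:.1f} px, minor = {minor:.1f} px")
    # A servo loop would move the rotational axis so as to drive `major`
    # toward zero, at which point the axis sits over the feature.
```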