Abstract

This paper describes a system that combines stereo vision with a 5-DOF robotic manipulator, enabling it to locate and reach for objects in an unstructured environment. Our system uses an affine stereo algorithm, a simple but robust approximation to the geometry of stereo vision, to estimate positions and surface orientations. It can be calibrated very easily with just four reference points. These are defined by the robot itself, moving the gripper to four known positions (self-calibration). The inevitable small errors are corrected by a feedback mechanism which implements image-based control of the gripper's position and orientation. Integral to this feedback mechanism is the use of affine active contour models which track the real-time motion of the gripper across the two images. Experiments show the system to be remarkably immune to unexpected translations and rotations of the cameras and changes of focal length, even after it has ‘calibrated’ itself.
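To make the four-point self-calibration concrete: if affine stereo is read as modelling the stacked image coordinates of the two cameras as an affine function of 3-D position (a weak-perspective approximation), then four non-coplanar reference points, such as known gripper positions, determine that mapping exactly, and position estimates follow by inverting it in a least-squares sense. The sketch below illustrates this idea in Python/NumPy; the function names, the least-squares formulation, and the synthetic check are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of affine stereo calibration and triangulation.
# Assumed model: w = Q @ X + q0, where w = (u_l, v_l, u_r, v_r) are the
# stereo image coordinates of a point at world position X.
import numpy as np

def calibrate_affine_stereo(world_pts, image_pts):
    """Fit Q (4x3) and q0 (4,) from four reference points.

    world_pts : (4, 3) known gripper positions (must be non-coplanar).
    image_pts : (4, 4) observed stereo image coordinates for those positions.
    """
    # Augment world points with a constant 1 so the offset q0 is estimated too.
    A = np.hstack([world_pts, np.ones((4, 1))])              # (4, 4)
    # Four non-coplanar points determine the 16 parameters exactly.
    params, *_ = np.linalg.lstsq(A, image_pts, rcond=None)   # (4, 4)
    Q = params[:3].T                                          # (4, 3)
    q0 = params[3]                                            # (4,)
    return Q, q0

def triangulate(Q, q0, w):
    """Recover a 3-D position from stereo coordinates w by least squares."""
    X, *_ = np.linalg.lstsq(Q, np.asarray(w) - q0, rcond=None)
    return X

if __name__ == "__main__":
    # Synthetic check: a made-up affine model stands in for the two cameras.
    rng = np.random.default_rng(0)
    Q_true, q0_true = rng.normal(size=(4, 3)), rng.normal(size=4)
    ref_world = rng.normal(size=(4, 3))                       # four reference positions
    ref_image = ref_world @ Q_true.T + q0_true                # their image coordinates
    Q, q0 = calibrate_affine_stereo(ref_world, ref_image)
    target = np.array([0.3, -0.1, 0.8])
    print(triangulate(Q, q0, target @ Q_true.T + q0_true))    # approximately `target`
```

Because the recovered mapping is only approximate, residual positioning errors remain; as the abstract notes, these are removed by image-based feedback on the gripper, tracked in both views with affine active contour models.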
