Abstract

One of the most desirable characteristics of a robotic manipulator is its flexibility. Flexibility and adaptability can be achieved by incorporating vision and, more generally, sensory information in the feedback loop. Our research introduces a framework called controlled active vision for the efficient integration of a vision sensor in the feedback loop. This framework was applied to the problem of robotic visual tracking and servoing, and the results were very promising. Full 3-D robotic visual tracking was achieved at rates of 30 Hz. Most importantly, the tracking was successful even under the assumption of poor calibration of the eye-in-hand system. This paper extends the framework to other problems of sensor-based robotics, such as deriving depth maps from controlled motion, vision-assisted grasping, active calibration of the robot-camera system, and computing the relative pose of the target with respect to the camera. We address these problems by combining adaptive control techniques with computer vision algorithms. The paper concludes with a discussion of several related issues, such as the stability and robustness of the proposed algorithms and the problem of incorporating stereo information into the existing algorithms in order to increase the accuracy of the estimated depth.
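The abstract does not spell out the control law used for visual servoing. As a rough illustration of the kind of image-based visual servoing update that eye-in-hand frameworks of this type build on, the following is a minimal sketch, assuming a simple proportional law with an estimated interaction matrix (the function names, gain, and feature values below are hypothetical, not taken from the paper):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) of one point feature.

    x, y are normalized image coordinates and Z is the estimated depth.
    The rows map camera velocity (vx, vy, vz, wx, wy, wz) to the
    feature's image-plane velocity (xdot, ydot).
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def servo_step(features, targets, depths, gain=0.5):
    """One proportional visual-servoing step: stack per-feature Jacobians
    and command a camera velocity that drives the feature error to zero."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(targets)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error  # 6-vector velocity command

# Hypothetical example: three tracked point features with rough depth estimates
feats = [(0.10, 0.05), (-0.08, 0.12), (0.02, -0.09)]
goals = [(0.00, 0.00), (-0.10, 0.10), (0.05, -0.05)]
depths = [1.2, 1.1, 1.3]
print(servo_step(feats, goals, depths))
```

In an adaptive scheme of the kind the abstract alludes to, quantities such as the depths (and hence the interaction matrix) would be estimated online rather than assumed known, which is what allows the loop to tolerate poor calibration of the eye-in-hand system.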
