Abstract

Visual control of robots using cameras and vision systems has been studied since the 1980s. Visual (image-based) features such as points, lines, and regions can be used, for example, to align a manipulator or gripping mechanism with an object. Vision thus becomes part of the control system, providing feedback about the state of the environment.

In the 'look and move' approach, the vision system captures images of the target object and the robot end effector, analyzes them, and reports a pose for the robot to achieve; there is no real-time correction of the robot path. This method suits the wide range of applications that do not require real-time correction, since it places far lighter demands on computation and communication bandwidth, which has made it feasible outside the laboratory. The obvious drawback is that if the part moves between the 'look' and 'move' steps, the vision system has no way of knowing; in practice this rarely happens with fixtured parts. A further drawback is lower accuracy: with 'look and move', the final accuracy of the computed part pose depends directly on the accuracy of the hand-eye calibration (an offline calibration relating camera space to robot space). If the calibration is erroneous, so is the estimated part pose.

Closed-loop control of a robot system usually consists of two intertwined processes: visual tracking and control of the robot's end effector. Tracking provides a continuous estimate and update of image features while the robot or target object moves; based on this sensory input, a control sequence is generated. Y. Shirai and H. Inoue first described such 'visual control' of a robotic manipulator using a visual feedback loop.
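The difference between open-loop 'look and move' and closed-loop correction can be sketched in a toy one-dimensional example. All names and the scalar measurement model below are illustrative assumptions, not taken from the paper: a calibration scale error of 10% leaves a proportional residual in the open-loop case, while repeated measure-and-correct cycles drive the error toward zero despite the same miscalibration.

```python
# Toy 1-D comparison: open-loop 'look and move' vs closed-loop servoing.
# The scalar model and a 10% calibration error are assumptions for
# illustration only.

def look_and_move(target, calib_scale):
    """One shot: the commanded pose inherits the calibration error."""
    return target * calib_scale  # residual error = target * (calib_scale - 1)

def closed_loop(target, calib_scale, gain=0.5, steps=50):
    """Repeated measure-and-correct; miscalibration only slows convergence."""
    pos = 0.0
    for _ in range(steps):
        error = (target - pos) * calib_scale  # error as 'seen' by the camera
        pos += gain * error                   # proportional correction
    return pos

print(abs(look_and_move(1.0, 1.1) - 1.0))  # residual of 0.1 remains
print(abs(closed_loop(1.0, 1.1) - 1.0))    # shrinks toward zero
```

The closed loop converges for any gain with `gain * calib_scale < 2`, which is why feedback tolerates calibration error that 'look and move' cannot.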
Gilbert describes an automatic rocket-tracking camera that keeps the target centered in the camera's image plane by means of pan/tilt controls (Gilbert et al., 1983). Weiss proposed adaptive control to handle the non-linear, time-varying relationship between robot pose and image features in image-based servoing. Detailed simulations of image-based visual servoing are described for a variety of 3-DOF manipulator structures (Webber & Hollis, 1988).
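The standard image-based servoing control law regulates the feature error with a camera velocity v = -λ L⁺ (s - s*), where L is the interaction matrix relating feature motion to camera motion. The sketch below is a minimal simulation under assumed conditions (one point feature, a camera translating in x/y at fixed depth Z); the 2-DOF interaction matrix is specific to this toy setup, not taken from the cited works.

```python
import numpy as np

def interaction_matrix(Z):
    """2-DOF interaction matrix for x/y camera translation at depth Z."""
    return np.array([[-1.0 / Z, 0.0], [0.0, -1.0 / Z]])

def ibvs_step(s, s_star, Z, lam=0.5):
    """Classic IBVS law: v = -lam * pinv(L) @ (s - s_star)."""
    L = interaction_matrix(Z)
    return -lam * np.linalg.pinv(L) @ (s - s_star)

def simulate(s0, s_star, Z=1.0, dt=0.1, steps=100):
    """Euler-integrate the feature motion ds/dt = L @ v."""
    s = np.array(s0, dtype=float)
    for _ in range(steps):
        v = ibvs_step(s, s_star, Z)
        s = s + dt * (interaction_matrix(Z) @ v)
    return s

final = simulate([0.3, -0.2], np.array([0.0, 0.0]))
```

With λ = 0.5 and dt = 0.1, the feature error contracts by a constant factor each step and the feature converges to the desired image location, illustrating the exponential decrease the control law is designed to achieve.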
