Abstract
Visual information can serve as a sensory input in both open-loop and closed-loop robot control. Visual servoing, however, takes place only within closed-loop control: in an open-loop system the vision sensor merely provides an initial feature extraction from which a robot motion sequence is generated, and both the features and the motion can be computed offline. In contrast, a closed-loop robot system uses vision as a real-time sensor and comprises two phases, tracking and control. Tracking continuously estimates and updates the image features while the robot or the object moves, and this information drives the real-time control loop. The main contribution of this work is a visual servoing approach that exploits the images obtained by a Kinect (RGB-D) camera. The proposed approach, called 4 × 2D visual servoing, combines the corresponding color and depth images to build two new images; from these four images the control error signals are computed in order to track the objects. In addition, the approach introduces a coordinate system called the visible side coordinate system. This chapter first reviews the types of visual servoing, then introduces the 4 × 2D visual servoing approach and the visible side coordinate system, and finally illustrates the concept of the proposed approach and how the error signal is computed.
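To make the idea of combining aligned color and depth frames into derived images and computing an image-space error more concrete, the sketch below is a minimal, hypothetical illustration in Python. It does not reproduce the chapter's actual 4 × 2D construction; the depth band, the way the two derived images are built, and the centroid-based error are all assumptions chosen only to show the general pattern of an RGB-D feature error.

```python
import numpy as np

def build_derived_images(color, depth, depth_band=(0.5, 1.5)):
    """Hypothetical construction of two derived images from an aligned
    RGB-D pair: (1) the color image masked to a depth band of interest,
    (2) the depth image masked to a crude color-based foreground estimate.
    This is illustrative only, not the chapter's 4 x 2D construction."""
    near, far = depth_band
    depth_mask = (depth > near) & (depth < far)          # pixels in the depth band (metres)

    # Derived image 1: color restricted to the depth band.
    color_in_band = np.where(depth_mask[..., None], color, 0)

    # Crude brightness-based foreground estimate, purely illustrative.
    gray = color.mean(axis=2)
    color_mask = gray > gray.mean()

    # Derived image 2: depth restricted to the color-based foreground.
    depth_in_fg = np.where(color_mask, depth, 0.0)
    return color_in_band, depth_in_fg

def centroid_error(mask_current, mask_desired):
    """Image-space error between the centroid of the tracked region in the
    current frame and in the desired (reference) frame."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()]) if xs.size else np.zeros(2)
    return centroid(mask_desired) - centroid(mask_current)

# Toy usage with synthetic frames; a real system would grab aligned Kinect frames here.
color = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.3, 3.0, (480, 640)).astype(np.float32)
c_img, d_img = build_derived_images(color, depth)
err = centroid_error(d_img > 0, np.roll(d_img, 10, axis=1) > 0)
print("image-space error (pixels):", err)
```

In a closed-loop servoing scheme, an error of this kind would be recomputed at every frame by the tracking phase and fed to the controller that generates the robot motion.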