Abstract
In this paper, the problem of controlling the relative pose between a robot camera and a rigid object of interest is solved using nonlinear system theory. The camera-object visual interaction model is derived in state-space form in terms of image points. It is then shown that the image-based visual system is completely controllable. Moreover, as a consequence of nonlinear controllability, global uniform asymptotic stability of the image reference set-point is formally proved using Lyapunov's direct method. Active contours are used to track the 2-D projection of the object's visible surface in the image plane, and a 3-D estimation procedure based on prediction errors is used to cope with the unknown depth of the object. Experimental results with a 6-DOF robot manipulator equipped with a camera mounted on its wrist validate the framework.
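For context, the following is a minimal sketch of the standard point-feature interaction model commonly used in image-based visual servoing; the symbols (x, y) for normalized image coordinates, Z for the unknown depth, and v_c for the camera spatial velocity are conventional choices and are not necessarily the notation or the exact state-space model adopted in the paper.

\[
\dot{\mathbf{s}} =
\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}
=
\begin{bmatrix}
-\dfrac{1}{Z} & 0 & \dfrac{x}{Z} & x y & -(1 + x^{2}) & y \\[4pt]
0 & -\dfrac{1}{Z} & \dfrac{y}{Z} & 1 + y^{2} & -x y & -x
\end{bmatrix}
\mathbf{v}_c ,
\]

where \(\mathbf{s} = (x, y) = (X/Z, Y/Z)\) is the image projection of a 3-D point \((X, Y, Z)\) expressed in the camera frame and \(\mathbf{v}_c\) stacks the linear and angular velocity of the camera. Stacking such rows for several image points yields a nonlinear state-space model driven by the camera velocity, which is the type of setting in which controllability and Lyapunov stability results of the kind stated in the abstract are formulated.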