Abstract

This paper presents an image-based visual servoing strategy for the autonomous navigation of a holonomic mobile robot from its current pose to a desired one, each specified only through an image acquired by the on-board central catadioptric camera. This kind of vision sensor combines lenses and mirrors to enlarge the field of view. The proposed visual servoing does not require any metric information about the viewed three-dimensional scene and is mainly based on a novel geometric property, the auto-epipolar condition, which occurs when the two catadioptric views (current and desired) are related by a pure translation. This condition can be detected in real time in the image domain by observing when a set of so-called disparity conics have a common intersection. The auto-epipolar condition and the pixel distances between the current and target image features are used to design the image-based control law. Lyapunov-based stability analysis and simulation results demonstrate the parametric robustness of the proposed method. Experimental results are presented to show the applicability of our visual servoing in a real context.
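A conic can be represented by a 3x3 symmetric matrix C, and a homogeneous image point p lies on it exactly when p^T C p = 0. The common-intersection test described above can then be sketched as checking that this algebraic residual vanishes for every disparity conic at a candidate point. The sketch below is only illustrative: the circle construction stands in for the paper's actual disparity conics, and the function names are hypothetical, not taken from the paper.

```python
import numpy as np

def conic_residual(C, p):
    """Algebraic residual p^T C p of homogeneous point p on conic C."""
    return float(p @ C @ p)

def have_common_point(conics, p, tol=1e-9):
    """True if every conic passes (numerically) through point p."""
    return all(abs(conic_residual(C, p)) < tol for C in conics)

def circle_conic(cx, cy, r):
    """3x3 conic matrix of the circle (x-cx)^2 + (y-cy)^2 = r^2.

    A circle is just a convenient conic for illustration; real
    disparity conics would come from the catadioptric geometry.
    """
    return np.array([[1.0, 0.0, -cx],
                     [0.0, 1.0, -cy],
                     [-cx, -cy, cx * cx + cy * cy - r * r]])

# Three circles that all pass through the point (1, 0):
conics = [circle_conic(0, 0, 1),
          circle_conic(2, 0, 1),
          circle_conic(1, 1, 1)]

p_common = np.array([1.0, 0.0, 1.0])   # homogeneous coordinates
p_other = np.array([0.0, 1.0, 1.0])

print(have_common_point(conics, p_common))  # True
print(have_common_point(conics, p_other))   # False
```

In the servoing context, the candidate point would be the common intersection estimated from the image data; when the residuals all vanish, the two views are related by a pure translation and the auto-epipolar condition holds.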
