Abstract
This article proposes a Linear Visual Servoing (LVS)-based method for controlling the position and attitude of an omnidirectional mobile robot. Two markers are used to express the target position and attitude in binocular visual-space coordinates, based on which a new binocular visual-space information vector that includes both position and attitude-angle information is defined. The binocular visual space and the motion space of the omnidirectional mobile robot are linearly approximated, and the robot's translational and rotational velocities are generated using the approximation matrix and the difference in binocular visual-space information between a target marker and a robot marker. Because these velocities are generated solely from disparity information on the image, as in existing LVS, no camera angle is required; the method is therefore robust against calibration errors in camera angles, as is existing LVS. The effectiveness of the proposed method is confirmed by simulation.
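The control structure described above can be sketched as follows. This is a minimal, illustrative toy, not the paper's implementation: the feature map `M`, the gain, and all variable names are assumptions, and the binocular visual-space feature is stood in for by a simple invertible linear function of the robot pose so the closed loop can be checked numerically.

```python
import numpy as np

def lvs_velocity(xi_target, xi_robot, A, gain=0.8):
    """LVS-style control law (illustrative): the velocity command is a
    constant linear approximation matrix A applied to the difference in
    binocular visual-space information between target and robot markers."""
    return gain * (A @ (xi_target - xi_robot))

# Toy closed-loop check: pretend the visual-space feature is itself a
# linear, invertible function M of the robot pose (x, y, theta), so that
# choosing A near the inverse of M drives the pose error to zero. In the
# actual method, A would approximate the map from binocular disparity
# features (including attitude angle) to the robot's motion space.
M = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # pose -> visual feature (assumed)
A = np.linalg.inv(M)              # linear approximation matrix

pose = np.array([1.0, -0.5, 0.4])  # current (x, y, theta)
target = np.zeros(3)               # target pose at the origin
dt = 0.1
for _ in range(100):
    # Velocity is generated only from the visual-space difference,
    # never from the pose itself (no camera angle is needed).
    u = lvs_velocity(M @ target, M @ pose, A)
    pose = pose + dt * u           # integrate robot motion

print(np.linalg.norm(pose))        # residual pose error after 10 s
```

With `A` exactly inverting `M`, the discrete error contracts by a factor of `1 - gain*dt` per step, so the pose converges to the target; the method's robustness claim is that a moderately wrong `A` (e.g. from camera-angle calibration error) still yields a contracting loop.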