Abstract
This paper presents a new method for visual homing for a robot moving on the ground plane. A relevant issue in vision-based navigation is the field-of-view constraint of conventional cameras. We overcome this problem by means of omnidirectional vision, and we propose a vision-based homing control scheme that relies on the 1D trifocal tensor. The technique employs a reference set of images of the environment, previously acquired at different locations, together with the images taken by the robot during its motion. In order to take advantage of the qualities of omnidirectional vision, we define a purely angle-based approach that requires no distance information. This approach, which takes the planar motion constraint into account, motivates the use of the 1D trifocal tensor. In particular, the additional geometric constraints enforced by the tensor improve the robustness of the method in the presence of mismatches. A key advantage of our proposal is that the control scheme computes the robot velocities only from angular information, which is very precise; in addition, we present a procedure that computes the angular relations between all the views even when they are not directly related by feature matches. The feasibility of the proposed approach is supported by a stability analysis and by results from simulations and experiments with real images.
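To make the geometric core of the abstract concrete, the following is a minimal sketch of how a 1D trifocal tensor can be estimated linearly from bearing-only measurements under planar motion. The scene, camera poses, and function names here are hypothetical illustrations, not the paper's implementation: each landmark bearing is encoded as a 2D homogeneous direction, and the 2x2x2 tensor is recovered as the null space of a linear system built from at least seven matched triples.

```python
import numpy as np

def bearing(cam_xy, cam_theta, pt_xy):
    """Bearing of a landmark as a 2D homogeneous direction (cos a, sin a)
    in the camera frame -- the only measurement an omnidirectional
    camera moving on the plane needs to provide."""
    d = np.asarray(pt_xy, dtype=float) - np.asarray(cam_xy, dtype=float)
    a = np.arctan2(d[1], d[0]) - cam_theta
    return np.array([np.cos(a), np.sin(a)])

def estimate_1d_trifocal_tensor(U1, U2, U3):
    """Linearly estimate the 2x2x2 tensor T satisfying the trilinear
    constraint  sum_{i,j,k} T[i,j,k] * u1[i] * u2[j] * u3[k] = 0
    for every corresponding bearing triple (needs >= 7 matches)."""
    rows = [np.einsum('i,j,k->ijk', u1, u2, u3).ravel()
            for u1, u2, u3 in zip(U1, U2, U3)]
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(2, 2, 2)   # null-space vector, defined up to scale

# --- synthetic planar scene (hypothetical poses and landmarks) ---
rng = np.random.default_rng(0)
cams = [((0.0, 0.0), 0.1), ((1.0, 0.5), -0.2), ((2.0, -0.3), 0.3)]
pts = rng.uniform(-5.0, 5.0, size=(10, 2))

U = [[bearing(c, th, p) for p in pts] for (c, th) in cams]
T = estimate_1d_trifocal_tensor(U[0], U[1], U[2])

# every matched triple should satisfy the trilinear constraint
residuals = [abs(np.einsum('ijk,i,j,k->', T, u1, u2, u3))
             for u1, u2, u3 in zip(U[0], U[1], U[2])]
print(max(residuals))  # close to zero up to numerical noise
```

With noise-free bearings the residuals vanish up to floating-point precision; in practice the same linear system, solved over many noisy matches, gives the tensor whose additional constraints the paper exploits to reject mismatches.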