Abstract

Purpose
The purpose of this paper is to present a visual servo tracking strategy for a wheeled mobile robot in which the unknown feature depth can be identified simultaneously during the visual servoing process.

Design/methodology/approach
Using the reference, desired and current images, system errors are constructed from measurable signals obtained by decomposing Euclidean homographies. Then, by taking advantage of the concurrent learning framework, both historical and current system data are used to construct an adaptive updating mechanism that recovers the unknown feature depth. A kinematic controller is designed for the mobile robot to accomplish the visual servo trajectory tracking task. Lyapunov techniques and LaSalle's invariance principle are used to prove that the system errors and the depth estimation error converge to zero simultaneously.

Findings
Simulation and comparative experimental results show that the concurrent learning-based visual servo tracking and identification strategy is reliable, accurate and efficient. Both the trajectory tracking and depth estimation errors converge to zero.

Originality/value
Based on the concurrent learning framework, an adaptive control strategy is developed that enables the mobile robot to identify the unknown scene depth while accomplishing the visual servo trajectory tracking task.
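For readers unfamiliar with the concurrent learning framework mentioned above, the following is a minimal sketch of how such an estimator combines current and recorded data to drive a scalar parameter estimate (here standing in for the unknown feature depth) toward its true value. It assumes a generic linear-in-parameters regression u = y * theta; the class name, gains and regression form are illustrative assumptions, not the error system or update law derived in the paper.

```python
# Minimal sketch of a concurrent-learning style adaptive update for a single
# unknown parameter (e.g. an inverse feature depth). This is NOT the paper's
# update law: the class name, the regression form u = y * theta and all gains
# below are assumptions made purely for illustration.

class ConcurrentLearningEstimator:
    def __init__(self, gamma=1.0, k_cl=0.5, stack_size=20):
        self.gamma = gamma            # adaptation gain on the current data
        self.k_cl = k_cl              # gain on the recorded (historical) data
        self.stack_size = stack_size  # maximum number of stored data points
        self.history = []             # recorded (y_j, u_j) pairs
        self.theta_hat = 0.0          # current parameter estimate

    def record(self, y, u):
        """Store an informative data point; the history term lets the estimate
        converge even without persistent excitation of the current regressor."""
        if len(self.history) < self.stack_size:
            self.history.append((y, u))

    def step(self, y, u, dt):
        """One Euler step of
        theta_hat_dot = gamma * y * e + k_cl * gamma * sum_j y_j * e_j,
        where e = u - y * theta_hat is the instantaneous prediction error."""
        e = u - y * self.theta_hat
        history_term = sum(yj * (uj - yj * self.theta_hat)
                           for yj, uj in self.history)
        self.theta_hat += (self.gamma * y * e
                           + self.k_cl * self.gamma * history_term) * dt
        return self.theta_hat
```

In this kind of scheme, the stored data stack plays the role of the historical system data mentioned in the abstract: it keeps the adaptation informative between moments when the current measurements alone would not identify the parameter.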
