Abstract

This paper presents a new approach to robust visual servoing based on a global visual feature: the luminance of a randomly distributed, limited set of pixels. The approach aims to improve the real-time performance of the visual servoing scheme: using these features reduces the computation time of the servoing task and removes the need for a matching and tracking process. Concerning the control scheme, we propose an approach based on second-order error dynamics instead of the usual first-order error dynamics. The goal is a new control law that improves the mobile robot's robustness to kinematic modelling errors during the visual servoing scheme. The new control law ensures convergence of the mobile robot to its desired pose even in the presence of such modelling errors. Experimental results are presented to validate the proposed approaches and to demonstrate their efficiency.
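For context, classical image-based visual servoing imposes a first-order exponential decay on the visual error, whereas the control scheme summarised above is built on second-order error dynamics. The sketch below is a generic illustration of that contrast, not the paper's exact formulation: e denotes the feature error, L_e the interaction matrix, and the gains lambda, k_p, k_d are illustrative assumptions.

```latex
% Classical first-order error dynamics (standard IBVS, given here only for contrast):
%   impose  \dot{e} = -\lambda e,  which with  e = s - s^*  and  \dot{e} = L_e v
%   leads to the familiar velocity control law
\dot{\mathbf{e}} = -\lambda\,\mathbf{e}
\quad\Longrightarrow\quad
\mathbf{v} = -\lambda\,\widehat{\mathbf{L}_{\mathbf{e}}}^{+}\,\mathbf{e},
\qquad \mathbf{e} = \mathbf{s} - \mathbf{s}^{*}

% Second-order error dynamics (illustrative form; the gains k_p, k_d are assumptions):
%   the commanded motion now also depends on \dot{e}, which is what provides the
%   extra robustness to kinematic modelling errors claimed in the abstract.
\ddot{\mathbf{e}} + k_d\,\dot{\mathbf{e}} + k_p\,\mathbf{e} = \mathbf{0}
```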
