Abstract

The direct use of the entire photometric image information as dense features for visual servoing brings several advantages. First, it does not require any feature detection, matching, or tracking process. Second, thanks to the redundancy of visual information, the precision at convergence is very high. However, the corresponding highly nonlinear cost function reduces the convergence domain. In this paper, we propose visual servoing based on the analytical formulation of Gaussian mixtures to enlarge the convergence domain. Pixels are represented by two-dimensional Gaussian functions that denote a "power of attraction." In addition to controlling the camera velocities during the servoing, we also optimize the Gaussian spreads, allowing the camera to converge precisely to a desired pose even from a distant initial one. Simulations show that our approach outperforms the state of the art, and real experiments show the effectiveness, robustness, and accuracy of our approach.
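As an illustration only (not the authors' implementation), the sketch below shows one common way to build such a Gaussian-mixture image representation: every pixel contributes an isotropic 2D Gaussian weighted by its intensity, and the spread parameter trades off a wide convergence basin (large spread) against precision at convergence (small spread). The function name, the brute-force evaluation, and the toy usage are our own assumptions.

```python
import numpy as np

def gaussian_mixture_image(image, spread):
    """Dense Gaussian-mixture representation of a grayscale image.

    Each source pixel (u, v) contributes a 2D isotropic Gaussian of
    standard deviation `spread`, weighted by its intensity, producing a
    smoothed "power of attraction" map. (Up to normalization, this is
    equivalent to convolving the image with a Gaussian kernel.)
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w), dtype=float)
    # Brute-force sum of every pixel's Gaussian contribution (O(N^2) in the
    # number of pixels; fine for a sketch, not for real-time servoing).
    for v in range(h):
        for u in range(w):
            d2 = (xs - u) ** 2 + (ys - v) ** 2
            out += image[v, u] * np.exp(-d2 / (2.0 * spread ** 2))
    return out

# Toy usage: a large spread yields a broad, smooth cost landscape that eases
# convergence from a distant pose; shrinking the spread restores precision.
img = np.zeros((32, 32))
img[16, 16] = 255.0
coarse = gaussian_mixture_image(img, spread=8.0)
fine = gaussian_mixture_image(img, spread=1.0)
```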
