Abstract

Visual Servoing (VS) has been researched for over forty years, but real-world adoption has been slow. Challenges include camera calibration, the lack of (or difficulty integrating) reliable real-time visual trackers, and the absence of simple control interfaces through which robots can be controlled. Uncalibrated Visual Servoing (UVS) presents a viable approach to facilitate robot control and task definition in unstructured environments. Tasks are defined through visual features directly in image space. By estimating the full non-parametric image Jacobian, no a priori models or camera calibration are required. In practice, UVS is highly dependent on camera positioning, visual tracker performance, and the underlying robot control. In this paper we explore the performance of UVS with respect to these dependencies, both in simulation and with a physical robot. Through the use of ROS-UVS, our open-source Uncalibrated Visual Servoing library, we hope that characterizing the behaviour of UVS will help facilitate adoption for new users and serve to showcase the features and practical applications of our library.
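The abstract's core idea, estimating the image Jacobian online rather than deriving it from camera calibration, is commonly realized in the UVS literature with a rank-1 Broyden update combined with a pseudoinverse control law. The sketch below illustrates that standard technique; the function names and gain value are illustrative assumptions, not the ROS-UVS API.

```python
import numpy as np

def broyden_update(J, dq, ds):
    """Rank-1 Broyden update of an estimated image Jacobian.

    J  : current Jacobian estimate (features x joints)
    dq : observed joint displacement
    ds : observed visual-feature displacement
    Implements J <- J + (ds - J dq) dq^T / (dq^T dq).
    """
    denom = float(dq @ dq)
    if denom < 1e-12:
        return J  # motion too small to inform the estimate
    return J + np.outer(ds - J @ dq, dq) / denom

def uvs_step(J, error, gain=0.1):
    """One uncalibrated visual-servoing step: dq = -gain * pinv(J) @ error."""
    return -gain * np.linalg.pinv(J) @ error

# Toy example: the (unknown) true mapping is linear.
J_true = np.array([[2.0, 0.0], [0.0, 3.0]])
J_est = np.eye(2)                      # no calibration: start from identity
dq = np.array([1.0, 0.0])              # exploratory joint motion
ds = J_true @ dq                       # observed feature motion
J_est = broyden_update(J_est, dq, ds)  # first column now matches J_true
```

The update requires only observed joint and feature displacements, which is why no a priori camera model is needed; in practice its quality degrades with tracker noise, one of the dependencies the paper characterizes.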
