Abstract

Most robotic vision algorithms are designed for robots operating in structured environments where the world is assumed to be rigid. These algorithms fail to provide optimal behavior when the robot must be controlled with respect to active non-rigid targets. This paper presents a new framework for visual servoing that accomplishes the robot positioning task even in non-rigid environments. We introduce a space-time representation scheme for modeling the deformations of a non-rigid object and propose a model-free hybrid approach that exploits the two-view geometry induced by the space-time features to perform the servoing task. Our formulation can address a variety of non-rigid motions and can tackle large camera displacements without being affected by degeneracies in the task space. Experimental results validate our approach and demonstrate its robust and stable behavior.
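The abstract gives no implementation details, but the following minimal Python sketch illustrates one way the two-view geometry between the current and desired camera views could be estimated from matched space-time features and reduced to a scalar convergence signal. The function name, the use of OpenCV, and the RANSAC-based estimation are illustrative assumptions on our part, not the paper's method.

import numpy as np
import cv2

def epipolar_servo_error(pts_current, pts_desired):
    """Estimate two-view geometry and return a mean epipolar residual.

    pts_current, pts_desired: (N, 2) float32 arrays of matched image
    points in the current and desired camera views (assumed given by
    an upstream space-time feature tracker).
    """
    # Robustly fit the fundamental matrix relating the two views.
    F, inlier_mask = cv2.findFundamentalMat(
        pts_current, pts_desired, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        raise RuntimeError("F could not be estimated (degenerate configuration)")
    keep = inlier_mask.ravel().astype(bool)
    # Homogeneous coordinates of the inlier correspondences.
    x1 = np.hstack([pts_current[keep], np.ones((keep.sum(), 1), np.float32)])
    x2 = np.hstack([pts_desired[keep], np.ones((keep.sum(), 1), np.float32)])
    # The epipolar constraint x2^T F x1 = 0 holds exactly when the two
    # views coincide, so the mean absolute residual can serve as a
    # scalar measure of how far the robot is from the desired pose.
    residuals = np.abs(np.einsum('ij,jk,ik->i', x2, F, x1))
    return float(residuals.mean())

In a hybrid scheme such a residual would be only one component of the servo error; the explicit failure branch above reflects the degenerate configurations the paper claims its formulation avoids.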
