Abstract

Visual servoing is a manipulation control strategy that positions objects precisely using imprecisely calibrated camera-lens-manipulator systems. To integrate sensor-based manipulation strategies such as visual servoing into robotic systems quickly and easily, a system framework and a task representation must exist that facilitate this integration. The framework must also be extensible, so that obsolete sensor systems can be replaced easily and new technologies incorporated as they become available. In this paper we present a framework for expectation-based visual servoing, which guides tasks visually based on the expected visual appearance of the task. This expected appearance is generated by a model of the environment that represents objects with texture-mapped geometric models. We present a system structure that facilitates the integration of various configurations of visual servoing systems, along with a hardware implementation of the proposed system and experimental results using a stereo camera system.
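
To make the underlying control idea concrete, the sketch below shows the classical image-based visual servoing law v = -lambda * L^+ * (s - s*), which frameworks like the one described here typically build on: observed image features s are driven toward expected features s*, which in the expectation-based setting would be extracted from a view rendered from the texture-mapped environment model. This is a minimal illustration under standard assumptions (point features at known depth Z, a proportional gain), not the paper's implementation; all numbers and names are illustrative.

```python
# Minimal sketch of classical image-based visual servoing (IBVS):
# drive observed point features s toward expected features s_expected
# via the camera twist v = -gain * pinv(L) @ (s - s_expected).
# Illustrative only; not the paper's specific implementation.
import numpy as np

def interaction_matrix(points, Z):
    """Stack the standard 2x6 point-feature interaction matrix for
    each normalized image point (x, y) at assumed depth Z."""
    rows = []
    for x, y in points:
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x])
    return np.array(rows)

def ibvs_step(s, s_expected, Z, gain=0.5):
    """One control step: camera twist (vx, vy, vz, wx, wy, wz) that
    reduces the image-space error between observed and expected features."""
    error = s - s_expected
    L = interaction_matrix(s.reshape(-1, 2), Z)
    return -gain * np.linalg.pinv(L) @ error

# Toy simulation: three perturbed image points converge back to the
# expected positions under the first-order feature-motion model s_dot = L v.
s_expected = np.array([-0.1, -0.1, 0.1, -0.1, 0.0, 0.1])
rng = np.random.default_rng(0)
s = s_expected + 0.05 * rng.standard_normal(s_expected.shape)
Z, dt = 1.0, 0.1
for _ in range(200):
    v = ibvs_step(s, s_expected, Z)
    s = s + dt * interaction_matrix(s.reshape(-1, 2), Z) @ v
print(f"final feature error: {np.linalg.norm(s - s_expected):.6f}")  # near zero
```

In the expectation-based setting the abstract describes, s_expected would not be a fixed goal vector: it would be recomputed each cycle from features extracted from the rendered, texture-mapped model view of the task.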
