Abstract

We introduce a robotic-vision system that extracts object representations autonomously, exploiting a tight interaction of visual perception and robotic action within a perception-action cycle [Ecological Psychology 4 (1992) 121; Algebraic Frames for the Perception and Action Cycle, 1997, 1]. Controlled movement of an object grasped by the robot enables us to compute the transformations of the entities used to represent aspects of objects, and to find correspondences between entities within an image sequence. A general accumulation scheme allows us to acquire robust information from the partial information extracted from the single frames of an image sequence. Here we use this scheme with a preprocessing stage in which 3D line segments are extracted from stereo images. However, the accumulation scheme can be used with any kind of preprocessing, as long as the entities used to represent objects can be brought into correspondence by certain equivalence relations, such as rigid-body motion. We show that an accumulated representation can be applied within a tracking algorithm. The accumulation scheme is an important module of a vision-based robot system on which we are currently working. In this system, objects will be represented by different visual and tactile entities, and the object representations will be learned autonomously. We discuss the accumulation scheme in the context of this project.
