Abstract

Scene Representation Based on Dynamic Field Theory: From Human to Machine

Stephan K. Zibner1*, Christian Faubel1, Ioannis Iossifidis1 and Gregor Schöner1

1 Ruhr-Universität Bochum, Institut für Neuroinformatik, Germany

A typical human-robot cooperation task puts an autonomous robot in the role of an assistant to the human user. Fulfilling requests such as “hand me the red screwdriver” demands an internal representation of the spatial layout of the scene as well as an understanding of labels and identifiers like “red” that are associated with the spatial information. In addition, the representation of the scene must be updated as the outside world changes.

Studies probing the nature and extent of human internal representations [1] suggest that we keep a limited, non-pictorial representation of our perception in memory. Experimental studies on human visual working memory using a change detection paradigm [2] identify space as the key to binding multiple features of an object. Dynamic Field Theory, a theory of embodied cognition [4], provides process models that explain such results. These models are based on Dynamic Neural Fields, a model of neural activity in the human cortex defined over metric spaces. Applying the theory in the paradigm of autonomous robots provides a guideline for designing a neurally plausible robotic scene representation.

We approach the problems of building up, maintaining, and updating a robotic scene representation with an architecture built from a set of Dynamic Neural Fields [5]. At the core of this architecture are three-dimensional Dynamic Neural Fields which, inspired by [2], dynamically associate two-dimensional object locations in an allocentric reference frame with extracted low-dimensional object features such as color. The unit of information representing an association in these fields is localized supra-threshold activity, called a peak, which provides both a detection decision on input and an estimate of continuous parameters, such as space or color hue, represented by the field’s dimensions. Through the architectural connections, the three-dimensional fields are sequentially filled with associative peaks as soon as the robot perceives a scene. Using the stability characteristics of the fields and a steady coupling to current camera input, changes in object positions are tracked even for multiple moving objects, objects are memorized when they move out of view, and associations are removed automatically once an object is removed from the scene.

The resulting application is tested on the robotic platform CoRA (Cooperative Robotic Assistant). The peaks resulting from an autonomous scanning sequence are successfully held in the three-dimensional associative Dynamic Neural Field, even when objects move out of view. Here, the application shows a capacity limit of four to five objects that can be kept concurrently in the scene representation. If multiple objects are moved within the robot’s field of view, fewer objects can be tracked simultaneously: the capacity limit drops to three objects for multi-object tracking, bearing in mind that not only the spatial positions of the objects are updated correctly, but the stored feature associations are carried along as well. The same decrease in capacity between static and moving objects is observed in humans [3].
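To make the field dynamics behind these peaks concrete, the following is a minimal one-dimensional sketch of a Dynamic Neural Field, written in Python with NumPy. All parameter values (resting level, time scale, kernel strengths) and the choice of local excitation with global inhibition are illustrative assumptions, not values from the CoRA implementation; the architecture described in the abstract couples several such fields, including three-dimensional associative ones, which this sketch does not reproduce.

```python
# Minimal sketch of a one-dimensional Dynamic Neural Field (Amari dynamics).
# All parameters are illustrative assumptions, not values from the CoRA system.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.arange(101.0)                 # field dimension, e.g. hue or position
h, tau, dt = -5.0, 10.0, 1.0         # resting level, time scale, Euler step
u = np.full_like(x, h)               # field activation, starts at rest

exc_kernel = 4.0 * gaussian(x, 50, 3)   # local excitatory interaction
c_glob = 1.0                            # global inhibition strength
stimulus = 8.0 * gaussian(x, 30, 4)     # localized input, e.g. a detected object

def step(u, s):
    f = 1.0 / (1.0 + np.exp(-u))                        # sigmoid output
    interaction = np.convolve(f, exc_kernel, mode="same") - c_glob * f.sum()
    return u + (dt / tau) * (-u + h + s + interaction)  # Amari field equation

for _ in range(200):                  # input present: a peak forms at x = 30
    u = step(u, stimulus)
print("peak with input at", x[u > 0].mean())

for _ in range(200):                  # input removed: the peak sustains itself
    u = step(u, 0.0)
print("peak without input at", x[u > 0].mean())
```

Under these assumed parameters, the second loop shows the self-sustained peaks that allow such an architecture to keep objects in memory once they leave the field of view, as described in the abstract.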

