Abstract

When artificial agents interact and cooperate with other agents, either human or artificial, they need to recognize others' actions and infer their hidden intentions solely from observing their surface-level movements. Indeed, action and intention understanding in humans is believed to facilitate a number of social interactions and is supported by a complex neural substrate (i.e., the mirror neuron system). Implementing such mechanisms in artificial agents would pave the way for a vast range of advanced cognitive abilities, such as social interaction, adaptation, and learning by imitation. We present a first step towards a fully-fledged intention recognition system by enabling an artificial agent to internally represent action patterns, and to subsequently use such representations to recognize, and possibly to predict and anticipate, behaviors performed by others. We investigate a biologically-inspired approach by adopting the formalism of Associative Self-Organizing Maps (A-SOMs), an extension of the well-known Self-Organizing Map. The A-SOM learns to associate its activities with different inputs over time, where inputs are high-dimensional and noisy observations of others' actions. The A-SOM maps actions to sequences of activations in a dimensionally reduced topological space, where each centre of activation provides a prototypical and iconic representation of the action fragment. We present preliminary experiments on an action recognition task using a publicly available database of thirteen commonly encountered actions, with promising results.
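To make the mechanism concrete, the sketch below shows one simplified way such an associative map could work. It is not the authors' implementation: the map size, learning rates, Gaussian activity profile, and the synthetic "action" data are all illustrative assumptions. The associative weights here learn, via a delta rule, to map the map's own activity at one time step to its activity at the next, which is what supports anticipating upcoming action fragments.

```python
# Minimal illustrative sketch of an Associative Self-Organizing Map (A-SOM).
# All sizes and learning parameters are hypothetical placeholders.
import numpy as np

class ASOM:
    def __init__(self, rows, cols, input_dim, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Main codebook: one prototype vector per map unit.
        self.W = self.rng.random((rows * cols, input_dim))
        # Associative weights: map the previous activity pattern to the
        # current one, enabling prediction of the next activation centre.
        self.A = np.zeros((rows * cols, rows * cols))
        # Grid coordinates of each unit, for the neighborhood function.
        self.coords = np.array(
            [(r, c) for r in range(rows) for c in range(cols)], dtype=float)

    def activity(self, x):
        # Gaussian activity profile around the best-matching unit (BMU);
        # the width 0.5 is an arbitrary choice for this sketch.
        d = np.linalg.norm(self.W - x, axis=1)
        return np.exp(-d**2 / (2 * 0.5**2))

    def train_step(self, x, prev_act, lr=0.1, sigma=1.5, lr_assoc=0.05):
        d = np.linalg.norm(self.W - x, axis=1)
        bmu = np.argmin(d)
        # Neighborhood-weighted SOM update of the main codebook.
        g = np.exp(-np.sum((self.coords - self.coords[bmu])**2, axis=1)
                   / (2 * sigma**2))
        self.W += lr * g[:, None] * (x - self.W)
        act = self.activity(x)
        if prev_act is not None:
            # Delta rule: associate the previous activity with the current
            # one, so the trained map can anticipate what follows.
            pred = self.A @ prev_act
            self.A += lr_assoc * np.outer(act - pred, prev_act)
        return act

    def predict_next(self, act):
        # Anticipate the next activity pattern from the current one.
        return self.A @ act

# Toy usage: a repeating 8-frame "action" of noisy 20-D feature vectors.
rng = np.random.default_rng(1)
action = rng.random((8, 20))
som = ASOM(6, 6, 20, rng)
for epoch in range(200):
    prev = None
    for frame in action:
        prev = som.train_step(frame + 0.02 * rng.standard_normal(20), prev)

# Each frame now activates a prototypical region of the reduced map, and
# the anticipated activity should resemble that of the following frame.
anticipated = som.predict_next(som.activity(action[0]))
```

In this toy setting, each action frame corresponds to a centre of activation on the low-dimensional map, and the learned associative weights chain those centres into a sequence, which is the sense in which the abstract speaks of recognizing, predicting, and anticipating behaviors.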
