Abstract

We report on our approach towards creating socially intelligent robots, which is heavily inspired by recent experimental findings about the neurocognitive mechanisms underlying action and emotion understanding in humans. Our approach uses neuro-dynamics as a theoretical language to model cognition, emotional states, decision making and action. The control architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode relevant information in the form of self-sustained activation patterns, which are triggered by input from connected populations and evolve continuously in time. The architecture implements a dynamic and flexible context-dependent mapping from observed hand and facial actions of the human onto adequate complementary behaviors of the robot that take into account the inferred goal and inferred emotional state of the co-actor. The dynamic control architecture was validated in multiple scenarios in which an anthropomorphic robot and a human operator assemble a toy object from its components. The scenarios focus on the robot’s capacity to understand the human’s actions and emotional states, to detect errors, and to adapt its behavior accordingly by adjusting its decisions and movements during the execution of the task.
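For readers unfamiliar with the formalism, architectures of this kind are typically formalized with Amari-type dynamic neural field dynamics; a generic sketch of the field equation (the specific kernels, inputs and parameter values of the architecture described here are not reproduced) is

    \tau \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int w(x - x')\, f\big(u(x',t)\big)\, dx' + S(x,t) + h,

where u(x,t) is the activation of the population encoding feature value x, w is a lateral interaction kernel with local excitation and longer-range inhibition that supports self-sustained activation patterns, f is a sigmoidal firing-rate function, S(x,t) collects the input from connected populations and external sources, h is the resting level, and \tau is the time constant of the field dynamics.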

Highlights

  • A major challenge in modern robotics is the design of socially intelligent robots that can interact or cooperate with people in their daily tasks in a human-like way

  • We have developed a cognitive control architecture for human-robot joint action that integrates action simulation, goal inference, error detection and complementary action selection (Bicho, Erlhagen, Louro, & Costa e Silva, 2011; Bicho, Erlhagen, Louro, Costa e Silva, Silva, & Hipolito, 2011), based on the neurocognitive mechanisms underlying human joint action (Bekkering et al., 2009)

  • The robot uses speech to communicate to the human partner the outcome of the goal inference and decision making processes implemented in the dynamic neural field model

Introduction

A major challenge in modern robotics is the design of socially intelligent robots that can interact or cooperate with people in their daily tasks in a human-like way. This is consistent with the modeling study by Grecucci, Cooper, and Rumiati (2007), who proposed a computational model of action resonance and its modulation by emotional stimulation, based on the assumption that aversive emotional states enhance the processing of events. In this way, the robot is fully alert to all types of errors that can occur during the execution of the task and is able to anticipate them and act before they occur. These connections implement the idea that perceived emotions play an important role at an early stage, during decision making and action preparation (AEL layer) of a complementary action, and may also affect the execution at the kinematics level (motor control). This is motivated by recent neuroscience studies by Ferri, Campione, Dalla Volta, Gianelli, and Gentilucci (2010) and Ferri, Stoianov, et al. (2010), which investigated the link between emotion perception and action planning and execution within a social context. They demonstrated that assisting an actor with a fearful expression requires smoother and slower movements than assisting an actor in a positive emotional (e.g., happy) state.
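To make the last point concrete, the following is a hypothetical, simplified sketch in Python (not taken from the paper) of how an inferred emotional state of the co-actor could be used to rescale the timing of the robot's complementary movement; the emotion labels, scaling factors and function names are illustrative assumptions only.

    # Hypothetical illustration (not the paper's implementation): an inferred
    # emotional state of the co-actor modulates the kinematics of the robot's
    # complementary action, in the spirit of Ferri et al. (2010): a fearful
    # partner leads to slower, smoother movements than a happy partner.

    from dataclasses import dataclass

    # Assumed scaling factors: 1.0 = nominal movement time; values > 1 slow down.
    EMOTION_TIME_SCALE = {
        "happy":   0.9,   # positive partner -> slightly faster movement
        "neutral": 1.0,
        "fearful": 1.5,   # anxious partner -> slower, smoother approach
    }

    @dataclass
    class MovementPlan:
        duration_s: float      # total movement time in seconds
        peak_velocity: float   # m/s, scales inversely with duration

    def modulate_kinematics(base: MovementPlan, emotion: str) -> MovementPlan:
        """Rescale movement time (and hence peak velocity) by the inferred emotion."""
        scale = EMOTION_TIME_SCALE.get(emotion, 1.0)
        return MovementPlan(
            duration_s=base.duration_s * scale,
            peak_velocity=base.peak_velocity / scale,
        )

    if __name__ == "__main__":
        nominal = MovementPlan(duration_s=2.0, peak_velocity=0.4)
        print(modulate_kinematics(nominal, "fearful"))
        # MovementPlan(duration_s=3.0, peak_velocity=0.266...)

In the actual architecture, this modulation arises from the coupling between the emotional-state representation and the decision making (AEL) and motor control levels rather than from a lookup table; the sketch only illustrates the direction of the effect.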

Dynamical neural fields as a theoretical framework for the implementation
Setup of the human-robot experiments
Results
Experiment 1
Experiment 2
Experiment 3
Experiment 4
Discussion