Abstract

Joint attention is one of the most important cognitive functions for the emergence of communication, not only between humans but also between humans and robots. In previous work, we demonstrated how a robot can acquire primary joint attention behavior (gaze following) without external evaluation. However, that method required the human to tell the robot when to shift its gaze. This paper presents a method that removes this constraint by introducing an attention selector based on a measure combining the saliencies of object features and motion cues. To realize natural interaction, a self-organizing map for real-time face pattern separation and contingency learning for gaze following without external evaluation are employed. The attention selector drives the robot's gaze to switch frequently between the human face and an object, and each resulting pair of a face pattern and a gaze motor command is fed to the contingency learning. The motion cues are expected to reduce the number of incorrect training pairs caused by asynchronous interaction, which would otherwise hinder the convergence of the contingency learning. The experimental results show that gaze shifting utilizing motion cues enables the robot to synchronize its own motion with the human's motion and to learn joint attention efficiently, in about 20 min.
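Below is a minimal sketch of how such an attention selector might combine the two cues, assuming a simple additive weighting of feature saliency and motion saliency. The weights, function names, and candidate format are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical saliency weights; the abstract does not specify the actual
# combination used in the paper. Motion is weighted higher so that gaze
# shifts tend to coincide with the human's own movements.
W_FEATURE = 1.0  # weight for static object-feature saliency (e.g., color)
W_MOTION = 2.0   # weight for motion-cue saliency

def saliency(feature_sal, motion_sal):
    """Combined saliency measure from feature and motion cues (assumed additive)."""
    return W_FEATURE * feature_sal + W_MOTION * motion_sal

def select_attention(candidates):
    """Pick the most salient gaze target among detected regions.

    candidates: list of dicts with keys 'label', 'feature_sal', 'motion_sal'.
    Returns the label ('face' or an object id) with the highest combined score.
    """
    scores = [saliency(c["feature_sal"], c["motion_sal"]) for c in candidates]
    return candidates[int(np.argmax(scores))]["label"]

# Example: motion near an object (e.g., the human manipulating it) raises its
# motion saliency, so the robot shifts gaze from the face to the object. Each
# such shift yields one training pair (current face pattern, gaze motor
# command) for the contingency learning.
candidates = [
    {"label": "face",   "feature_sal": 0.8, "motion_sal": 0.1},
    {"label": "object", "feature_sal": 0.5, "motion_sal": 0.6},
]
print(select_attention(candidates))  # -> 'object'
```

Weighting motion cues more heavily biases gaze shifts toward moments when the human is actively moving, so the selected gaze commands tend to align with the human's current focus of attention, which is what keeps most of the collected training pairs correct despite the lack of external evaluation.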
