Gaze control plays a crucial role in conversational robots that interact with multiple participants. In particular, gaze aversion is not only important for improving human likeness but is also strongly associated with the expression of a robot's personality, and should therefore be accounted for in dialogue interactions. While previous studies have proposed gaze models for multi-party conversations, few have considered gaze aversion. In this study, we developed a gaze motion generation model focusing on three features: (1) the gaze target (toward a dialogue partner or gaze aversion), (2) the gaze duration, and (3) the eyeball direction during gaze aversion. We extracted the gaze model parameters of two individuals with distinct levels of extraversion from a multimodal three-party dialogue database and created a model that can modulate the expression of extraversion by setting a gaze aversion ratio parameter. We implemented this model on an android robot that can reproduce human-like gaze behaviors. We conducted video-based experiments in which participants watched videos of the robot talking with two humans, replicating the three-party dialogue scenarios used in this study, and rated their impressions of the robot's extraversion. The results demonstrated that, by controlling the gaze aversion parameters, the robot could exhibit various levels of extraversion.
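To make the role of the gaze aversion ratio parameter concrete, the following is a minimal illustrative sketch of how such a gaze event generator might be structured. The function name, default values, and duration distributions are assumptions for illustration only; the paper's actual parameters are fit from the multimodal three-party dialogue database and are not reproduced here.

```python
import random

# Hypothetical sketch of the gaze-target selection described in the
# abstract: the robot either looks at a dialogue partner or averts its
# gaze, with the balance set by an aversion-ratio parameter. All names,
# defaults, and distributions are illustrative assumptions, not the
# authors' implementation.

def sample_gaze_event(partners, aversion_ratio=0.3,
                      mean_partner_dur=2.0, mean_aversion_dur=1.0):
    """Return (target, duration_s, eye_direction) for one gaze event."""
    if random.random() < aversion_ratio:
        # Gaze aversion: pick an off-partner eyeball direction and a
        # duration drawn from an (assumed) exponential distribution.
        direction = random.choice(["up", "down", "left", "right"])
        duration = random.expovariate(1.0 / mean_aversion_dur)
        return ("aversion", duration, direction)
    # Gaze toward a partner: eyes aligned with the chosen partner.
    target = random.choice(partners)
    duration = random.expovariate(1.0 / mean_partner_dur)
    return (target, duration, "toward_target")

# Example: a lower aversion_ratio would express higher extraversion.
print(sample_gaze_event(["partner_A", "partner_B"], aversion_ratio=0.2))
```

Under this sketch, raising `aversion_ratio` makes averted-gaze events more frequent, which, per the study's findings, would shift observers' impressions toward lower extraversion.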