Abstract
Gaze control plays a crucial role in conversational robots that interact with multiple participants. In particular, gaze aversion not only improves human likeness but is also strongly associated with the expression of a robot's personality, and it should therefore be accounted for in dialogue interactions. While previous studies have proposed gaze models for multi-party conversations, few consider gaze aversion. In this study, we developed a gaze motion generation model focusing on three features: (1) the gaze target (toward a dialogue partner or gaze aversion), (2) the gaze duration, and (3) the eyeball direction during gaze aversion. We extracted the gaze model parameters of two individuals with distinct levels of extraversion from a multimodal three-party dialogue database and created a model that can modulate the expression of extraversion by setting a gaze aversion ratio parameter. We implemented this model on an android robot, which can reproduce human-like gaze behaviors. Video-based experiments were conducted in which participants watched videos of the robot talking with two humans, replicating the three-party dialogue scenarios used in this study, and rated their impressions of the robot's extraversion. The results demonstrated that, by controlling the gaze aversion parameters, the robot could exhibit various levels of extraversion.
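As a rough illustration only (not the authors' implementation, whose parameters come from the dialogue database), a generator built around the three features above might sample each gaze event from a single aversion-ratio parameter. All names, the choice of exponential duration distributions, and the default mean durations below are assumptions for the sketch:

```python
import random

def sample_gaze_event(aversion_ratio,
                      partners=("partner_A", "partner_B"),
                      mean_partner_dur=2.0,
                      mean_aversion_dur=1.0):
    """Sample one gaze event as (target, eyeball_direction, duration_sec).

    aversion_ratio: probability that this event is a gaze aversion;
    per the model, a higher ratio is assumed to read as lower extraversion.
    Durations are drawn from exponential distributions (an assumption).
    """
    if random.random() < aversion_ratio:
        # Feature (1): gaze aversion; feature (3): eyeball direction
        # during aversion, chosen here uniformly from four directions.
        target = "aversion"
        direction = random.choice(["up", "down", "left", "right"])
        duration = random.expovariate(1.0 / mean_aversion_dur)
    else:
        # Feature (1): gaze toward one of the two dialogue partners.
        target = random.choice(partners)
        direction = f"toward_{target}"
        duration = random.expovariate(1.0 / mean_partner_dur)
    # Feature (2): gaze duration accompanies every event.
    return target, direction, duration

# Example: a more introverted setting averts gaze more often.
for _ in range(5):
    print(sample_gaze_event(aversion_ratio=0.6))
```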