Abstract

We consider mobile service robots that carry out tasks with, for, and around humans in their environments. Speech combined with on-screen display is a common mechanism for autonomous robots to communicate with humans, but such communication modalities may fail for mobile robots due to spatio-temporal limitations. To enable better human understanding of the robot given its mobility and autonomous task performance, we introduce the use of lights to reveal the dynamic robot state. We contribute expressive lights as a primary modality for the robot to communicate useful state information to humans. Such lights are persistent, non-invasive, and visible at a distance, unlike other existing modalities. Current programmable light arrays provide a very large animation space, which we address by introducing a finite set of parametrized signal shapes while still maintaining the needed animation design flexibility. We present a formalism for light animation control and an architecture that maps the representation of robot state to the parametrized light animation space. The mapping generalizes to multiple light strips and even to other expression modalities. We demonstrate our approach on CoBot, a mobile multi-floor service robot, and evaluate its validity through several user studies. Our results show that carefully designed expressive lights on a mobile robot help humans better understand robot states and actions and can have a desirable impact on collaborative human–robot behavior.
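To make the idea of a finite set of parametrized signal shapes concrete, the sketch below shows one possible realization. The shape names (`steady`, `blink`, `fade`), their parameters, and the example state-to-animation table are illustrative assumptions, not the paper's actual formalism or CoBot's implementation.

```python
# A minimal sketch of parametrized light signal shapes, assuming each
# shape maps time t (seconds) to a brightness level in [0, 1].
# All names, parameters, and states below are hypothetical examples.

def steady(t):
    """Constant brightness."""
    return 1.0

def blink(t, period=1.0, duty=0.5):
    """On for a `duty` fraction of each period, off otherwise."""
    return 1.0 if (t % period) < duty * period else 0.0

def fade(t, period=2.0):
    """Triangle wave: brightness ramps up, then down, each period."""
    phase = (t % period) / period
    return 2 * phase if phase < 0.5 else 2 * (1 - phase)

# Illustrative mapping from robot state to (shape, parameters, RGB color);
# this plays the role of the architecture's state-to-animation mapping.
STATE_TO_ANIMATION = {
    "navigating":        (steady, {},               (0, 255, 0)),
    "waiting_for_human": (blink,  {"period": 1.0},  (255, 255, 0)),
    "blocked":           (fade,   {"period": 2.0},  (255, 0, 0)),
}

def brightness(state, t):
    """Evaluate the animation for `state` at time `t`."""
    shape, params, _color = STATE_TO_ANIMATION[state]
    return shape(t, **params)
```

Because each shape is defined by a few parameters rather than per-LED keyframes, the animation space stays small and interpretable, while the same mapping table can drive multiple light strips or other expression modalities.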
