Abstract

Natural body gestures and speech dialogue are crucial for human-robot interaction (HRI) and human-robot symbiosis. Real interaction involves not only one-to-one communication but also communication among multiple people. We have therefore developed a system that adjusts gestures and facial expressions according to a speaker's location and situation for multi-party communication. Extending our previously developed real-time gesture planning method, we propose a gesture adjustment suited to human demands through motion parameterization, together with gaze motion planning that enables communication through eye-to-eye contact. We implemented the proposed motion planning method on the android Actroid-SIT, and we propose using a Key-Value Store to connect the components of our system. The Key-Value Store is a high-speed, lightweight dictionary database offering parallelism and scalability. We conducted multi-party HRI experiments with 1,662 subjects in total. In our HRI system, over 60% of the subjects started speaking to the Actroid, and the time they spent communicating with it also increased. In addition, we confirmed that our system gave humans a more sophisticated impression of the Actroid.
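The abstract does not detail how the Key-Value Store couples the system's components, but the idea can be illustrated with a minimal sketch. The class, key names (`speaker/location`, etc.), and values below are hypothetical placeholders, not the authors' actual implementation:

```python
import threading


class KeyValueStore:
    """Minimal thread-safe dictionary store, standing in for a
    high-speed, lightweight KVS that decouples system components."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)


# Components exchange data only through the store, so perception and
# motion planning stay independent and can run in parallel.
store = KeyValueStore()

# Perception side: write the detected speaker's position (hypothetical key).
store.put("speaker/location", (1.2, -0.4))  # metres, robot frame

# Planning side: read it back and aim the gaze accordingly.
x, y = store.get("speaker/location", (0.0, 0.0))
print(f"Plan gaze toward ({x:.1f}, {y:.1f})")
```

In a deployed system the in-process dictionary would presumably be replaced by a networked store, but the design benefit is the same: modules share state through keys rather than direct calls, which is what gives the architecture its parallelism and scalability.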
