Abstract

This paper reports recent results from a study on directing humanoids with a multi-modal command language. A system that interprets users' messages in the language through microphones, visual and tactile sensors, and control buttons in real time has been developed and applied to small humanoids. The command language is based on a simple, well-defined spoken language combined with non-verbal events detected by the sensors and buttons. In usability tests, subjects unfamiliar with the language were able to operate the small humanoids and complete their tasks by talking to them, gesturing, touching them, and pressing keypad keys, without a long learning stage. Our system, running on PCs, responded to multi-modal commands without significant delay. Multi-modal commands were more successful than spoken commands alone, although some users needed a number of trials to adapt to multi-modal communication in our language.
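As a rough illustration of how a spoken command might be combined with a non-verbal event of the kind described above, the sketch below fills a missing target in a parsed utterance from the most recent gesture, touch, or button event within a short time window. The names (`Command`, `Event`, `fuse_command`) and the windowing rule are hypothetical assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of multi-modal command fusion; not the paper's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    kind: str         # e.g. "gesture", "touch", "button"
    target: str       # e.g. "left_arm", "forward"
    timestamp: float  # seconds

@dataclass
class Command:
    verb: str               # parsed from speech, e.g. "raise"
    target: Optional[str]   # may be missing in speech ("raise this")
    timestamp: float

def fuse_command(cmd: Command, events: list[Event], window: float = 2.0) -> Command:
    """Fill a missing target from the most recent non-verbal event within a time window."""
    if cmd.target is None:
        recent = [e for e in events if abs(cmd.timestamp - e.timestamp) <= window]
        if recent:
            latest = max(recent, key=lambda e: e.timestamp)
            cmd.target = latest.target
    return cmd

# Example: saying "raise this" while touching the left arm -> "raise left_arm"
events = [Event(kind="touch", target="left_arm", timestamp=10.2)]
spoken = Command(verb="raise", target=None, timestamp=10.5)
print(fuse_command(spoken, events))
```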
