This paper presents a computational model of real-time, task-oriented dialog skills. The model, termed Ymir, bridges multimodal perception and multimodal action, and supports the creation of autonomous computer characters that afford full-duplex, real-time, face-to-face interaction with a human. Ymir has been prototyped in software, and a humanoid, called Gandalf, has been created that is capable of fluid multimodal dialog. Ymir demonstrates several new ideas in the creation of communicative computer agents, including perceptual integration of multimodal events, distributed planning and decision making, an explicit handling of real time, and a layered perceptuo-motor system with human-like motor-control characteristics. This paper describes the model's architecture and explains its main elements. Examples of implementation and performance are given, and the architecture's limitations and possibilities are discussed.