Abstract
This paper presents a computational model of real-time task-oriented dialog skills. The model, termed Ymir, bridges multimodal perception and multimodal action and supports the creation of autonomous computer characters that afford full-duplex, real-time face-to-face interaction with a human. Ymir has been prototyped in software, and a humanoid created, called Gandalf, capable of fluid multimodal dialog. Ymir demonstrates several new ideas in the creation of communicative computer agents, including perceptual integration of multimodal events, distributed planning and decision making, an explicit handling of real time, and a layered perceptuo-motor system with motor control exhibiting human characteristics. This paper describes the model's architecture and explains its main elements. Examples of implementation and performance are given, and the architecture's limitations and possibilities are discussed.
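To make the architectural ideas in the abstract concrete, the following is a minimal illustrative sketch, not the paper's actual implementation. It assumes hypothetical Percept, Blackboard, ReactiveLayer, and ContentLayer names to show timestamped multimodal events being integrated on a shared store and read by decision layers that operate on different time scales, in the spirit of perceptual integration, distributed decision making, and explicit handling of real time.

```python
# Toy sketch of a layered, event-driven dialog architecture in the spirit
# of Ymir's ideas -- NOT the paper's implementation. All class and method
# names here are hypothetical illustrations.

import time
from dataclasses import dataclass


@dataclass
class Percept:
    modality: str          # e.g. "gaze", "speech", "gesture"
    content: str           # symbolic description of the perceived event
    timestamp: float       # wall-clock time, for explicit real-time handling


class Blackboard:
    """Shared store that integrates percepts from all modalities."""
    def __init__(self):
        self.percepts: list[Percept] = []

    def post(self, percept: Percept) -> None:
        self.percepts.append(percept)

    def recent(self, within_seconds: float) -> list[Percept]:
        now = time.time()
        return [p for p in self.percepts if now - p.timestamp <= within_seconds]


class ReactiveLayer:
    """Fast layer: fires low-latency behaviors such as returning gaze."""
    def decide(self, board: Blackboard) -> list[str]:
        actions = []
        for p in board.recent(within_seconds=0.5):
            if p.modality == "gaze" and p.content == "user-looks-at-agent":
                actions.append("look-at-user")   # near-reflexive response
        return actions


class ContentLayer:
    """Slow layer: interprets integrated multimodal input for dialog content."""
    def decide(self, board: Blackboard) -> list[str]:
        modalities = {p.modality for p in board.recent(within_seconds=2.0)}
        # Integrate speech and gesture before committing to a content-level act.
        if {"speech", "gesture"} <= modalities:
            return ["interpret-utterance-with-deictic-gesture"]
        return []


if __name__ == "__main__":
    board = Blackboard()
    board.post(Percept("gaze", "user-looks-at-agent", time.time()))
    board.post(Percept("speech", "'move that one'", time.time()))
    board.post(Percept("gesture", "points-at-object", time.time()))
    for layer in (ReactiveLayer(), ContentLayer()):
        print(type(layer).__name__, "->", layer.decide(board))
```

The split into layers with different temporal windows mirrors the abstract's point about real time: reflex-like behaviors must fire within a fraction of a second, while content-level interpretation can integrate evidence over a longer span before acting.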