Abstract

This chapter introduces a software architecture for building intelligent multimodal assistants. The architecture consists of three basic components: a meme media system, an inference system, and an embodied interface agent system that delivers multimodal presentations to users. In an experimental implementation, the author instantiates these components as follows: IntelligentPad as the meme media system, Prolog as the logic-programming inference system, and the Multimodal Presentation Markup Language (MPML) for controlling the interface agent system. The implementation shows how character agents can be defined in a simple declarative manner by applying logic programming to meme media objects.
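As a rough illustration of the declarative style the abstract refers to, the following Prolog sketch shows how a character agent's behaviour might be expressed as rules over meme media objects. All predicate names, the agent name, and the pad name here are hypothetical, invented for illustration; they are not taken from the chapter's actual implementation, and the mapping from the resulting act to MPML is only implied.

```prolog
% Hypothetical sketch: a character agent defined as Prolog rules over
% meme media objects (pads). Predicate and object names are invented
% for illustration and do not come from the chapter's implementation.

% react(Agent, Event, Act): when the user clicks a pad, the agent
% responds with a multimodal presentation act (to be rendered via MPML).
react(Agent, click(Pad), speak(Agent, Text)) :-
    describes(Pad, Text).

% Example knowledge attached to a (hypothetical) pad.
describes(weather_pad, 'Today it is sunny.').

% Example query:
%   ?- react(agent1, click(weather_pad), Act).
%   Act = speak(agent1, 'Today it is sunny.').
```

The appeal of this style, as the abstract suggests, is that agent behaviour becomes a set of declarative rules that can be inspected and extended independently of the presentation layer.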
