Abstract
An important criterion for many intelligent user interfaces is that the interface be multimodal, that is, that it allow the user to interact with the system through a variety of different input channels. Beyond user interface interactions per se, the system may also need to process input from multiple channels to make decisions in response to interface interactions or for other related purposes.

The multimodal event parsing system described in our paper has been implemented in a working system called CERA, the Complex Event Recognition Architecture. CERA, developed under contract with NASA, has been used to identify complex events across multiple sensor channels in an advanced life support system demonstration project.

We will demonstrate:
- the CERA event recognition language,
- the CERA event recognition engine at work,
- a custom development environment for writing and debugging CERA event recognizers,
- visualization tools for complex event display, and
- integration of CERA with various toolkits and projects.

The CERA event recognition engine is written in Common Lisp [1] and has a custom development environment with visualization tools built on the Eclipse extensible IDE [2]. This combination provides an easy-to-use development environment that can be used remotely, while maintaining the interactive flexibility of Lisp.

As well as being a stand-alone event recognition system, CERA has been tightly integrated with the RAP execution system [3] and the I/NET Conversational Interface system for dialogue management [4]. This combination allows the creation of human/computer interfaces for dynamic systems that make use of natural language, multi-channel controls and sensors, and other available physical context.

Our demonstration will consist of several components designed to illustrate the various aspects of CERA and our approach to building multimodal interfaces. The first demonstration will show CERA processing and combining events from multiple input streams, including examples from the NASA advanced life support system domain. The emphasis of this demonstration will be on how event recognizers are built and how they work in practice.

Our second demonstration will illustrate the CERA IDE and visualization tools. These tools allow the programming of a remote CERA system and the monitoring and debugging of its operation. Techniques for monitoring recognition progress and examining partial recognition state will be shown.

Finally, we will demonstrate a more complex interface that combines natural language input with various non-linguistic input streams. An automotive telematics application will form the basis of this demonstration, and the audience will be encouraged to participate.
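To make the flavor of multi-channel event recognition concrete, the sketch below shows a minimal recognizer written in Common Lisp, the language CERA itself is implemented in. The channel names (:pump-status, :water-flow), the event structure, and the matching strategy are our own illustration, loosely inspired by the life support domain; they are not the actual CERA recognition language, whose syntax is described in the paper.

    ;; Illustrative sketch only: a minimal complex-event recognizer
    ;; that fuses two sensor channels.  This is NOT CERA syntax.

    (defstruct event
      channel   ; keyword naming the input stream, e.g. :water-flow
      value     ; sensor reading or symbolic value
      time)     ; timestamp in seconds

    (defvar *pending* nil
      "Partial recognition state: primitive events awaiting a match.")

    (defun recognize (event &key (window 10))
      "Feed one primitive EVENT into the recognizer.  Returns a complex
    :blockage-alarm event when a :pump-status :on reading is followed
    within WINDOW seconds by a low :water-flow reading; otherwise NIL."
      (case (event-channel event)
        ;; Channel 1: remember recent pump activations as partial state.
        (:pump-status
         (when (eq (event-value event) :on)
           (push event *pending*))
         nil)
        ;; Channel 2: a low flow reading completes the pattern if a
        ;; pump activation is pending inside the time window.
        (:water-flow
         (let ((pump (find-if (lambda (e)
                                (<= (- (event-time event) (event-time e))
                                    window))
                              *pending*)))
           (when (and pump (< (event-value event) 0.2))
             (setf *pending* (remove pump *pending*))
             (make-event :channel :blockage-alarm
                         :value (list pump event)
                         :time (event-time event)))))))

    ;; Example: a pump-on event followed 3 seconds later by low flow
    ;; yields a complex event combining the two primitive readings.
    (recognize (make-event :channel :pump-status :value :on :time 100))
    (recognize (make-event :channel :water-flow :value 0.1 :time 103))
    ;; => #S(EVENT :CHANNEL :BLOCKAGE-ALARM ...)

A real recognition language would of course also handle pruning of stale partial state, overlapping matches, and hierarchical composition of recognized events into still more complex ones.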