Abstract

This paper proposes an approach to integrating multimodal events, both user-generated (e.g., from audio recognizers and motion sensors) and user-consumed (e.g., through speech synthesizers and haptic synthesizers), into programming languages for the declarative specification of multimedia applications. More precisely, it presents extensions to the NCL (Nested Context Language) multimedia language. NCL is the standard declarative language for the development of interactive applications for Brazilian Digital TV and an ITU-T Recommendation for IPTV services. NCL applications extended with the multimodal features are presented as results. Historically, the Human-Computer Interaction research community has focused on user-generated modalities, through studies of user interaction, while the Multimedia community has focused on output modalities, through studies of timing and multimedia processing. The proposal in this paper is an attempt to integrate concepts from both research communities into a single high-level programming framework that aims to assist the authoring of multimedia/multimodal applications.
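
To make the idea concrete, below is a minimal, hypothetical sketch of how multimodal events might be declared in NCL-like syntax: a speech recognizer (user-generated modality) whose recognition event starts a speech synthesizer (user-consumed modality). The media types, the onRecognize condition role, and the connector shown here are illustrative assumptions for this summary, not the actual extensions defined in the paper.

    <!-- Hypothetical NCL fragment; "onRecognize" and the recognizer/synthesizer
         media types are assumptions, not the paper's actual syntax. -->
    <ncl xmlns="http://www.ncl.org.br/NCL3.0/EDTVProfile">
      <head>
        <connectorBase>
          <causalConnector id="onRecognizeStart">
            <simpleCondition role="onRecognize"/>
            <simpleAction role="start"/>
          </causalConnector>
        </connectorBase>
      </head>
      <body>
        <!-- entry points: the main video and the always-active recognizer -->
        <port id="entryVideo" component="mainVideo"/>
        <port id="entryVoice" component="voiceCommands"/>
        <media id="mainVideo" src="media/main.mp4"/>
        <!-- user-generated modality: speech recognition grammar (SRGS) -->
        <media id="voiceCommands" src="media/commands.srgs" type="application/srgs+xml"/>
        <!-- user-consumed modality: synthesized speech feedback (SSML) -->
        <media id="spokenFeedback" src="media/feedback.ssml" type="application/ssml+xml"/>
        <!-- when the recognizer fires, start the spoken feedback -->
        <link xconnector="onRecognizeStart">
          <bind role="onRecognize" component="voiceCommands"/>
          <bind role="start" component="spokenFeedback"/>
        </link>
      </body>
    </ncl>

The point of such a sketch is that both kinds of modalities are treated as ordinary media objects, so the language's existing event and link machinery can relate them to audiovisual content declaratively.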
