Abstract

Currently, most declarative languages used to define multimedia documents do not support the specification of interactive applications with multiple users and multimodal interaction. To handle multimodal and multi-user interaction in these languages, authors of multimedia applications must write custom imperative code and embed it in the declarative document. To fill this gap, this work proposes extensions to NCM (Nested Context Model), NCL (Nested Context Language), and Ginga-NCL that provide high-level abstractions for specifying multimodal and multi-user interaction. As a proof of concept, we developed a multimedia application that uses multi-user voice interaction.
