Abstract

Declarative multimedia authoring languages allow authors to combine multiple media objects, generating a range of multimedia presentations. Novel multimedia applications, aimed at improving the user experience, extend multimedia applications with multisensory content. The idea is to synchronize sensory effects with the audiovisual content being presented. The usual approach to specifying such synchronization is to mark the content of a main media object (e.g., a main video), indicating the moments when a given effect has to be executed. For example, a mark may represent when snow appears in the main video so that a cold wind effect can be synchronized with it. Declarative multimedia authoring languages provide a way to mark subparts of a media object through anchors. An anchor indicates its begin and end times (video frames or audio samples) relative to its parent media object. The manual definition of anchors in the above scenario is both inefficient and error-prone (i) when the size of the main media object increases, (ii) when a given scene component appears several times, and (iii) when the application requires marking several scene components. This paper tackles this problem by providing an approach for creating abstract anchors in declarative multimedia documents. An abstract anchor represents (possibly) several media anchors, indicating the moments when a given scene component appears in a media object's content. The author is therefore able to define the application behavior through relationships among, for example, sensory effects and abstract anchors. Prior to execution, abstract anchors are automatically instantiated for each moment the given element appears, and relationships are cloned so that the application behavior is maintained. This paper presents an implementation of the proposed approach using NCL (Nested Context Language) as the target language.
The abstract anchor processor is implemented in Lua and uses available video-recognition APIs to identify the begin and end times of abstract anchor instances. We also present an evaluation of our approach using real-world use cases.
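The mechanism described above can be illustrated with an NCL-like sketch. The abstract-anchor notation (the `label` attribute, the `_1`/`_2` instance naming, and the specific times) is illustrative only, not the paper's actual syntax; in standard NCL, anchors are `<area>` elements inside `<media>`, and behavior is expressed through `<link>`/`<bind>` elements referencing a connector.

```xml
<!-- Illustrative sketch; the abstract-anchor syntax is an assumption, not the paper's. -->

<!-- Before processing: one abstract anchor stands for every appearance of "snow". -->
<media id="mainVideo" src="movie.mp4">
  <area id="snowAbstract" label="snow"/>  <!-- hypothetical abstract anchor -->
</media>
<link xconnector="onBeginStart">
  <bind role="onBegin" component="mainVideo" interface="snowAbstract"/>
  <bind role="start" component="coldWindEffect"/>
</link>

<!-- After processing: one concrete anchor, with begin/end times found by video
     recognition, and one cloned link per detected occurrence. -->
<media id="mainVideo" src="movie.mp4">
  <area id="snowAbstract_1" begin="12s" end="27s"/>
  <area id="snowAbstract_2" begin="95s" end="110s"/>
</media>
<link xconnector="onBeginStart">
  <bind role="onBegin" component="mainVideo" interface="snowAbstract_1"/>
  <bind role="start" component="coldWindEffect"/>
</link>
<link xconnector="onBeginStart">
  <bind role="onBegin" component="mainVideo" interface="snowAbstract_2"/>
  <bind role="start" component="coldWindEffect"/>
</link>
```

Because each cloned link preserves the roles of the original, the cold-wind effect starts at every detected occurrence without the author enumerating the occurrences by hand.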
