Abstract

Deep learning research has enabled significant advances in several areas of multimedia, especially in tasks related to speech processing, hearing, and computer vision. In particular, recent usage scenarios in the hypermedia domain already use such deep learning tasks to build applications that are sensitive to the semantics of their media content. However, the development of such scenarios is usually done from scratch, since current hypermedia standards such as HTML do not fully support this kind of development. To support it, we propose that a hypermedia language should be extended to allow: (1) describing learning using structured media datasets; (2) recognizing the content semantics of media elements at presentation time; (3) using the recognized semantics as events during the multimedia presentation. To illustrate our approach, we extended the NCL language, and its underlying model NCM, to support these features. NCL (Nested Context Language) is the declarative language for developing interactive applications for Brazilian Digital TV and an ITU-T Recommendation for IPTV services. As a result of this work, we present a usage scenario that highlights how the extended NCL supports the development of content-aware hypermedia presentations, attesting to the expressiveness and applicability of the model.
