Abstract
Metadata on multimedia documents helps describe their content and ease their processing, for example by identifying events in temporal media and by carrying descriptive information about the resource as a whole. Such metadata is essentially static and may be associated with, or embedded in, the multimedia content. The aim of this paper is to present a proposal for multimedia document annotation based on modeling and unifying features elicited from content and structure mining. Our approach relies on the availability of annotated metadata representing segment content and structure, as well as segment transcripts. Temporal and spatial operators are also taken into account when annotating documents. Every feature is recorded in a descriptor called a “meta-document”. These meta-documents then serve as the basis for querying with adapted query languages.