Abstract

This thesis studies the fundamental question: what vocabulary of concepts is suited for machines to describe video content? Answering this question involves two annotation steps: first, specifying the list of concepts by which videos are described; second, labeling a set of videos per concept as its examples or counterexamples. The vocabulary is then constructed as a set of video concept detectors learned from the provided annotations by supervised learning.
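To make the two-step construction concrete, the following is a minimal sketch, not taken from the thesis: it assumes each video is represented by a precomputed feature vector and trains one binary classifier per concept from the example/counterexample annotations (the feature representation, the use of scikit-learn's LogisticRegression, and the decision threshold are all illustrative assumptions).

```python
# Hypothetical sketch: one supervised binary detector per vocabulary concept,
# learned from per-concept example / counterexample annotations.
import numpy as np
from sklearn.linear_model import LogisticRegression


def train_concept_detectors(features, annotations):
    """features: dict video_id -> feature vector (np.ndarray), assumed precomputed.
    annotations: dict concept -> dict video_id -> bool (True = example, False = counterexample).
    Returns one fitted detector per concept."""
    detectors = {}
    for concept, labels in annotations.items():
        X = np.stack([features[vid] for vid in labels])          # stack annotated videos
        y = np.array([int(is_pos) for is_pos in labels.values()])  # 1 = example, 0 = counterexample
        detectors[concept] = LogisticRegression(max_iter=1000).fit(X, y)
    return detectors


def describe(video_feature, detectors, threshold=0.5):
    """Describe a new video by the concepts whose detectors fire on it."""
    return [concept for concept, clf in detectors.items()
            if clf.predict_proba(video_feature.reshape(1, -1))[0, 1] >= threshold]
```

Any per-concept classifier could stand in for the logistic regression here; the point is only that the vocabulary ends up as a set of independently trained detectors, one per annotated concept.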
