Abstract

The continuously growing volume of user-generated videos has increased the need for efficient browsing of content collections and repositories, which in turn requires descriptive yet compact representations. To this end, a popular approach is to create a visual summary, which is far more expressive than alternatives such as textual descriptions. In this work, we present a video summarization approach based on the extraction and fusion of audio and visual features, producing dynamic video summaries, i.e., summaries comprising the most important segments of the original video while preserving their temporal order. Based on the extracted features, each segment is classified as “interesting” or “uninteresting,” and accordingly included in or excluded from the final summary. The novelty of our approach is that, prior to classification, the fused features are fuzzified, making them more intuitive and robust to uncertainty. We evaluate our approach on a large dataset of user-generated videos and demonstrate that fuzzy features boost classification performance, yielding more concrete video summaries.
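
To make the described pipeline concrete, below is a minimal sketch, not the authors' actual implementation: the triangular membership functions, the three-term (low/medium/high) fuzzy partition, the fusion-by-concatenation step, and the toy classifier and its threshold are all illustrative assumptions, since the abstract does not specify these details.

```python
# Illustrative sketch of fuzzify-then-classify video summarization.
# All membership breakpoints, feature layout, and the classifier are
# assumptions for demonstration, not the paper's actual method.
import numpy as np

def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    left = (x - a) / (b - a) if b != a else 1.0
    right = (c - x) / (c - b) if c != b else 1.0
    return max(min(left, right), 0.0)

def fuzzify(value):
    """Map a crisp, normalized feature value in [0, 1] to
    (low, medium, high) membership degrees."""
    return np.array([
        triangular_membership(value, 0.0, 0.0, 0.5),  # low
        triangular_membership(value, 0.0, 0.5, 1.0),  # medium
        triangular_membership(value, 0.5, 1.0, 1.0),  # high
    ])

def summarize(segments, classifier, threshold=0.5):
    """Keep segments classified as 'interesting'; temporal order is
    preserved because segments are processed in their original order."""
    summary = []
    for seg in segments:
        # Fuse by fuzzifying each crisp audio/visual feature and
        # concatenating the resulting membership vectors.
        fused = np.concatenate([fuzzify(v) for v in seg["features"]])
        if classifier(fused) >= threshold:
            summary.append(seg)
    return summary

# Toy classifier: average membership in the 'high' fuzzy sets
# (every third entry of the fused vector).
toy_classifier = lambda f: f[2::3].mean()

segments = [{"t": i, "features": np.random.rand(4)} for i in range(10)]
print([s["t"] for s in summarize(segments, toy_classifier)])
```

In practice, the binary classifier would be trained on labeled segments; the sketch only shows how fuzzification turns crisp fused features into membership degrees before the interesting/uninteresting decision.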
