Abstract

In this paper, a new technique is proposed for the automatic generation of a preview sequence of a feature film. The input video is first decomposed into basic units called shots; the proposed shot change detection algorithm detects both abrupt and gradual transition boundaries. Shots are then grouped into semantically related scenes by taking into account the visual characteristics and temporal dynamics of the video. Finally, an empirically motivated approach extracts the intense-interaction and action scenes to form the video abstract. Compared with related works that integrate visual and audio information, our purely visual approach is computationally simple yet effective.
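For intuition, the sketch below illustrates a generic twin-threshold, colour-histogram approach to flagging abrupt and gradual shot boundaries. It is not the detector described in the paper; the thresholds, window size, and histogram configuration are illustrative assumptions.

```python
import cv2

def detect_shot_boundaries(video_path, hard_cut_thresh=0.5,
                           gradual_thresh=0.2, gradual_window=15):
    """Flag candidate shot boundaries from frame-to-frame histogram distances.

    A large single-frame jump is reported as an abrupt cut; a sustained run of
    moderate differences is reported as a gradual transition. All parameter
    values here are assumed, not taken from the paper.
    """
    cap = cv2.VideoCapture(video_path)
    prev_hist = None
    run = 0           # length of the current run of moderate differences
    boundaries = []   # list of (frame_index, "abrupt" | "gradual")
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance: 0 = identical, 1 = maximally different
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > hard_cut_thresh:
                boundaries.append((idx, "abrupt"))
                run = 0
            elif d > gradual_thresh:
                run += 1
                if run == gradual_window:  # sustained moderate change
                    boundaries.append((idx - run + 1, "gradual"))
            else:
                run = 0
        prev_hist = hist
        idx += 1
    cap.release()
    return boundaries
```

The twin-threshold design follows the common intuition that hard cuts produce one large inter-frame difference, whereas dissolves and fades spread a smaller difference over many consecutive frames.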
