Abstract

Movie highlights are composed of video segments that induce a steady increase in the audience’s excitement. Automatic movie highlight extraction plays an important role in content analysis, ranking, indexing, and trailer production. To address this challenging problem, previous work suggested a direct mapping from low-level features to high-level perceptual categories. However, such work treated highlights only as intense scenes, such as fighting, shooting, and explosions; many hidden highlights are therefore missed because their low-level feature responses are weak. Driven by cognitive psychology analysis, combined top-down and bottom-up processing is used to derive the proposed two-way excitement model. Under the criteria of global sensitivity and local abnormality, middle-level features are extracted in excitement modeling to bridge the gap between the feature space and the high-level perceptual space. To validate the proposed approach, a group of well-known movies covering several typical genres is employed. Quantitative assessment using the derived excitement levels indicates that the proposed method produces promising results in movie highlight extraction, even when the response in the low-level audio-visual feature space is weak.

Highlights

  • Human–computer interaction (HCI) is crucial for user-friendly communication between human users and computer systems

  • We propose a new method to identify affective highlight segments in movies, which is useful for future sensor and HCI techniques

  • For movie highlight extraction, modeling of the excitement time curve is emphasized: global sensitivity and local abnormality values are derived from the low-level features computed across a film [31] (a minimal sketch of this idea follows this list)
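To make the two criteria concrete, the sketch below computes a per-frame excitement curve from generic low-level audio-visual features and thresholds it into highlight spans. The feature set, the equal weighting, the window and threshold values, and the function names (excitement_curve, extract_highlights) are illustrative assumptions, not the paper’s exact formulation of the two-way excitement model.

```python
import numpy as np

def excitement_curve(features, window=75, w_global=0.5, w_local=0.5):
    """Illustrative excitement curve from per-frame low-level features.

    features : (n_frames, n_features) array of low-level audio-visual
        measurements (e.g., motion magnitude, audio energy, shot-cut
        rate). Weights and smoothing here are assumptions for the
        sketch, not the paper's exact model.
    """
    f = np.asarray(features, dtype=float)

    # Global sensitivity: each feature normalized against its range
    # over the whole film, so globally intense moments score high.
    span = f.max(axis=0) - f.min(axis=0) + 1e-9
    global_sens = ((f - f.min(axis=0)) / span).mean(axis=1)

    # Local abnormality: deviation from a sliding local mean, so
    # "hidden" highlights with weak absolute feature values can still
    # stand out against their own neighborhood.
    kernel = np.ones(window) / window
    local_mean = np.vstack(
        [np.convolve(f[:, j], kernel, mode="same") for j in range(f.shape[1])]
    ).T
    local_std = f.std(axis=0) + 1e-9
    local_abn = (np.abs(f - local_mean) / local_std).mean(axis=1)
    local_abn /= local_abn.max() + 1e-9

    # Combine the two cues into one excitement value per frame.
    return w_global * global_sens + w_local * local_abn

def extract_highlights(curve, threshold=0.6, min_len=50):
    """Return (start, end) frame spans where excitement stays above threshold."""
    above = curve >= threshold
    spans, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                spans.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_len:
        spans.append((start, len(above)))
    return spans

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.random((1000, 3))  # stand-in for real per-frame features
    curve = excitement_curve(feats)
    print(extract_highlights(curve, threshold=0.55))
```

Scoring local abnormality against a sliding neighborhood, rather than against film-wide intensity alone, is what lets a segment with weak absolute feature values still surface as a highlight, matching the paper’s motivation for recovering “hidden” highlights.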


Summary

Introduction

Human–computer interaction (HCI) is crucial for user-friendly communication between human users and computer systems. Beyond providing effective input/output, HCI is expected to understand the intentions of users and their environment for better service-oriented interaction. These demands raise new challenges beyond conventional multimodal HCI, which covers audio, images, video, and graphics [1,2,3]. The technology proposed here to develop such a sensor is automatic video content analysis, which aims to reveal both the objective entities in movies and the hidden subjective feelings or emotions they evoke. It addresses the problems caused by the explosively growing repository of online movies, as it can sift through a large video database far faster than a typical viewer could.


