Abstract

This paper proposes a novel scene boundary detector that considers features suited to the fact that the definition of a scene changes with the target service or task. In the proposed method, the information in shots is categorized into two groups: relational and sequential information. Relational information is captured by multi-layered convolutional neural networks that merge and embed similarity vectors computed from visual and audio features. Sequential information, which captures characteristic patterns across consecutive shots, is handled by dual recurrent neural networks. Different definitions of scenes are reflected in the proposed method through supervised parameter estimation combined with a sampling method. Scene boundaries are rarely observed in video content, which results in a skewed class distribution. The sampling method augments boundary instances by reversing the order of shots, while reducing the number of non-boundary shots through variance-preserving shot filtering. Finally, a focal loss is adopted during training to obtain better parameters from the imbalanced dataset. The proposed method is evaluated on three datasets constructed from real-world movies. Our experiments empirically show that different definitions of a scene boundary affect the performance of scene boundary detection. The proposed deep neural networks, which exploit both relational and sequential information, demonstrate the ability to handle diverse scene definitions. With supervised learning, the proposed method can reflect the definition bias in each dataset. As a result, the proposed method shows its effectiveness in handling different types of information and adapting to different scene definitions, achieving state-of-the-art performance on two benchmark datasets.
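For reference, the focal loss mentioned above is the standard formulation of Lin et al. (2017), which down-weights easy, well-classified examples so that the abundant non-boundary shots dominate training less. The sketch below is a minimal, generic PyTorch implementation under that assumption, using the commonly cited defaults (alpha = 0.25, gamma = 2); it is an illustration of the technique, not the authors' released code, and the hyperparameter values are not taken from the paper.

```python
import torch

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits:  raw model outputs, shape (N,)
    targets: 0/1 labels, shape (N,); 1 marks a scene-boundary shot
    alpha/gamma are the common defaults, not values from the paper.
    """
    p = torch.sigmoid(logits)
    # p_t: the model's probability for the true class of each shot
    p_t = torch.where(targets == 1, p, 1 - p)
    alpha_t = torch.where(targets == 1,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1 - alpha))
    # (1 - p_t)^gamma shrinks the contribution of easy examples,
    # so rare boundary shots retain influence on the gradient
    loss = -alpha_t * (1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-8))
    return loss.mean()
```

With gamma = 0 this reduces to ordinary weighted cross-entropy, which is why the focal loss is a natural fit for the skewed boundary/non-boundary distribution the abstract describes.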
