Abstract

Recently, video summarization (VS) has emerged as one of the most effective tools for rapidly understanding video big data. Dictionary selection based on self‐representation and sparse regularization matches the goal of VS, which is to represent the original video with low reconstruction error using a small number of video frames. However, one crucial issue is that existing methods mainly use a single-view feature, which is insufficient for capturing the full pictorial details and degrades the quality of the produced video summary. Although a few methods use more than one feature, they simply concatenate the features directly, which fails to exploit the relationship between different features. Considering the complementarity of shallow and deep features, this paper proposes multiview feature co‐factorization based dictionary selection for VS, which uses the information common to both feature views. Specifically, two view features are used to fully exploit the pictorial information of video frames; the common information of the two views is then extracted through coupled matrix factorization to conduct dictionary selection for VS. Experiments carried out on two benchmark datasets demonstrate the effectiveness and superiority of the proposed method.
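The core idea of self‐representation dictionary selection with information shared across views can be sketched as follows. This is a minimal illustration, not the paper's actual coupled-factorization algorithm: it assumes a shared coefficient matrix `W` across both views and solves `min_W Σ_v ||X_v − X_v W||_F² + λ||W||_{2,1}` by proximal gradient descent, ranking frames by the row norms of `W`. The function name, feature dimensions, and parameter values are all hypothetical.

```python
import numpy as np

def multiview_dictionary_selection(views, lam=0.1, n_iter=200):
    """Hypothetical sketch: self-representation dictionary selection with
    a coefficient matrix W shared across feature views.
        min_W  sum_v ||X_v - X_v W||_F^2 + lam * ||W||_{2,1}
    Each X_v is (d_v, n): d_v-dimensional features for n frames.
    Returns one importance score per frame (row norms of W)."""
    n = views[0].shape[1]
    W = np.zeros((n, n))
    # Step size from the Lipschitz constant of the smooth term's gradient.
    L = 2 * sum(np.linalg.norm(X.T @ X, 2) for X in views)
    step = 1.0 / L
    for _ in range(n_iter):
        # Gradient of the summed reconstruction losses.
        grad = sum(2 * X.T @ (X @ W - X) for X in views)
        Z = W - step * grad
        # Row-wise soft thresholding: proximal operator of the L2,1 norm,
        # which drives most rows of W to zero (row sparsity).
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
        W = scale * Z
    return np.linalg.norm(W, axis=1)

# Usage: two synthetic "views" (shallow and deep features) of 50 frames;
# the 5 highest-scoring frames form the summary.
rng = np.random.default_rng(0)
X_shallow = rng.standard_normal((64, 50))   # e.g. colour-histogram features
X_deep = rng.standard_normal((512, 50))     # e.g. CNN features
scores = multiview_dictionary_selection([X_shallow, X_deep], lam=0.5)
summary = np.argsort(scores)[::-1][:5]
```

The row-sparse `L2,1` penalty is what makes this a dictionary *selection* rather than a dense reconstruction: only a few frames receive nonzero coefficient rows, and those frames serve as the summary.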
