Abstract

In this paper, we address the problem of egocentric video co-summarization. We show how an accurate shot-level summary can be obtained in a time-efficient manner using a random walk on a constrained graph in a transfer-learned feature space with label refinement. For transfer learning, we propose a new loss function that captures egocentric characteristics while fine-tuning a pre-trained ResNet on a set of auxiliary egocentric videos. Transfer learning is used to generate (i) an improved feature space and (ii) a set of labels to be used as seeds for the test egocentric video. A complete weighted graph is created for a test video in the transfer-learned feature space, with shots as the vertices. We derive two types of cluster-label constraints, Must-Link (ML) and Cannot-Link (CL), based on the similarity of the shots. ML constraints are used to prune the complete graph, which is shown to yield a substantial computational advantage, especially for long-duration videos. We derive expressions for the number of vertices and edges of the ML-constrained graph and show that this graph remains connected. A random walk is applied to obtain labels for the unmarked shots in this pruned graph, and CL constraints are then applied to refine the cluster labels. Finally, the shots closest to the individual cluster centres are used to build the summary. Experiments on short-duration videos from the CoSum and TVSum datasets and long-duration videos from the ADL and EPIC-Kitchens datasets clearly demonstrate the advantage of our solution over several state-of-the-art methods.
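To make the pipeline concrete, the sketch below (NumPy only) illustrates the stages the abstract describes: merging Must-Link shots into super-nodes to prune the complete graph, propagating seed labels by a random walk on the smaller graph, repairing Cannot-Link violations, and picking the shot nearest each cluster centre. This is a minimal illustrative sketch, not the authors' implementation; the Gaussian affinity, the propagation update, the CL repair rule, and all function names (e.g. `random_walk_labels`) are assumptions made for illustration.

```python
# Illustrative sketch of ML-pruned random-walk label propagation with CL repair.
# Assumed inputs: `feats` (one transfer-learned feature vector per shot),
# ML/CL constraint pairs, and seed labels from the auxiliary videos.
import numpy as np

def affinity(feats, sigma=1.0):
    """Dense Gaussian affinity between shot features (complete weighted graph)."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def merge_must_link(n, ml_pairs):
    """Union-find merge of ML pairs; returns a shot -> super-node index map."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in ml_pairs:
        parent[find(a)] = find(b)
    roots = sorted({find(i) for i in range(n)})
    idx = {r: k for k, r in enumerate(roots)}
    return np.array([idx[find(i)] for i in range(n)])

def random_walk_labels(W, seeds, n_labels, alpha=0.9, iters=100):
    """Propagate seed labels: Y <- alpha * P @ Y + (1 - alpha) * Y0."""
    P = W / np.maximum(W.sum(1, keepdims=True), 1e-12)  # row-stochastic walk
    Y0 = np.zeros((W.shape[0], n_labels))
    for node, lab in seeds.items():
        Y0[node, lab] = 1.0
    Y = Y0.copy()
    for _ in range(iters):
        Y = alpha * (P @ Y) + (1 - alpha) * Y0
    return Y

def refine_cannot_link(labels, Y, cl_pairs):
    """If a CL pair shares a label, move the less confident node to its runner-up."""
    for a, b in cl_pairs:
        if a != b and labels[a] == labels[b]:
            weak = a if Y[a, labels[a]] < Y[b, labels[b]] else b
            scores = Y[weak].copy()
            scores[labels[weak]] = -np.inf
            labels[weak] = int(scores.argmax())
    return labels

def summarize(feats, ml_pairs, cl_pairs, seeds, n_labels, sigma=1.0):
    group = merge_must_link(len(feats), ml_pairs)      # ML pruning: fewer vertices
    n_super = group.max() + 1
    sfeat = np.stack([feats[group == g].mean(0) for g in range(n_super)])
    W = affinity(sfeat, sigma)                         # walk runs on smaller graph
    sseeds = {group[i]: l for i, l in seeds.items()}
    Y = random_walk_labels(W, sseeds, n_labels)
    labels = refine_cannot_link(Y.argmax(1), Y,
                                [(group[a], group[b]) for a, b in cl_pairs])
    labels = labels[group]                             # map back to shot level
    summary = []
    for l in range(n_labels):                          # shot nearest each centre
        members = np.where(labels == l)[0]
        if len(members) == 0:
            continue
        centre = feats[members].mean(0)
        summary.append(int(members[((feats[members] - centre) ** 2)
                                   .sum(1).argmin()]))
    return sorted(summary)
```

The computational saving comes from the ML merge: the walk operates on super-nodes rather than all shots, so the affinity matrix and the propagation iterations shrink quadratically with the number of merged vertices, which matters most for long-duration videos.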
