Abstract

The exponential growth of video content production across industries creates an urgent need for effective video summarization (VS) techniques that ensure optimal storage and preserve the key information in a video. Compared to other domains, industrial videos are more challenging to process, as they usually contain diverse and complex events, which makes their online processing a difficult task. In this article, we introduce an online system for intelligent video capturing, coarse and fine redundancy removal, and summary generation. First, we capture video data through resource-constrained devices equipped with vision sensors in an industrial Internet of Things network and apply coarse redundancy removal by comparing low-level features. Second, we transmit the resulting frames to the cloud for detailed analysis, where sequential features are extracted to select candidate keyframes. Finally, we refine the candidate keyframes to retain those with maximum information as part of the summary. The key contributions of this article include the coarse and fine refinement of video data implemented over resource-restricted devices and the presentation of important data in the form of a summary. Experiments over publicly available datasets (code available at https://github.com/tanveer-hussain/DeepRes-Video-Summarization) evince a 0.3-unit increase in the F1 score compared to the state of the art, with reduced time complexity. Furthermore, we provide convincing results on our newly created dataset, recorded in an industrial environment, which is made publicly available to the research community along with its labeled ground truth.
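
To illustrate the coarse redundancy removal step described above, the following is a minimal sketch of how low-level features of consecutive frames could be compared on a resource-constrained device. The abstract does not specify the exact features or threshold used by the authors; this sketch assumes an HSV color histogram as the low-level descriptor, an OpenCV correlation-based similarity with an assumed threshold, and a hypothetical input file name.

```python
# Minimal sketch (not the authors' exact method): coarse redundancy removal
# by comparing a low-level descriptor (assumed: HSV color histogram) of each
# frame against the last kept frame, forwarding only sufficiently novel frames.
import cv2

HIST_SIMILARITY_THRESHOLD = 0.95  # assumed value; tune per deployment


def frame_histogram(frame):
    """Compute a normalized HSV color histogram as a low-level descriptor."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist)
    return hist


def coarse_filter(video_path):
    """Yield frames that differ noticeably from the last kept frame."""
    cap = cv2.VideoCapture(video_path)
    last_hist = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = frame_histogram(frame)
        if last_hist is None or cv2.compareHist(
                last_hist, hist, cv2.HISTCMP_CORREL) < HIST_SIMILARITY_THRESHOLD:
            last_hist = hist
            yield frame  # candidate frame to transmit to the cloud
    cap.release()


if __name__ == "__main__":
    # "industrial_clip.mp4" is a hypothetical file name for illustration.
    kept = list(coarse_filter("industrial_clip.mp4"))
    print(f"Kept {len(kept)} frames after coarse redundancy removal")
```

In this sketch, frames passing the coarse filter would then be transmitted to the cloud for sequential feature extraction and fine keyframe refinement, as outlined in the abstract.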
