Abstract

In surveillance systems, vast amounts of video data are collected from diverse sources to monitor ongoing activities. Typically, video is passively captured by visual sensors and forwarded to a command center, without intelligent Edge functionality to select essential video information and locally detect abnormal events. These shortcomings, common in practical surveillance scenarios, waste storage resources and make data management, retrieval, and informed decision-making complex and time-consuming. Endowing visual sensors with video summarization capabilities is therefore of utmost importance for smarter surveillance systems. Motivated by this rationale, this study proposes an efficient neural network-based video summarization method for surveillance systems. The proposed approach learns to segment a video optimally by measuring informative features from the data flow, then uses memorability and entropy to preserve the relevance and diversity of the video summary produced on the Edge. Experimental results on benchmark datasets show that the proposed scheme outperforms state-of-the-art counterparts and demonstrate the effectiveness of our method for video summarization in smart cities.
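To give a concrete sense of how entropy can serve as a diversity cue in frame selection, the sketch below ranks frames by the Shannon entropy of their intensity histograms. This is only an illustrative toy, not the authors' method: the function names (`frame_entropy`, `select_summary`) and the plain-list frame representation are assumptions for the example, and the paper's actual pipeline also uses learned features and memorability scores.

```python
import math
from collections import Counter

def frame_entropy(pixels):
    """Shannon entropy (in bits) of a frame's intensity histogram.

    Higher entropy roughly indicates more visual variety in the frame.
    `pixels` is a flat list of intensity values (a stand-in for a real frame).
    """
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_summary(frames, k):
    """Pick the k highest-entropy frames, returned in temporal order.

    A hypothetical selector: real systems would combine such a score
    with relevance cues (e.g., memorability) rather than entropy alone.
    """
    ranked = sorted(range(len(frames)),
                    key=lambda i: frame_entropy(frames[i]),
                    reverse=True)
    return sorted(ranked[:k])

# Toy frames: a flat frame carries no information; a varied one carries more.
flat = [128] * 64          # single intensity value -> 0 bits
varied = list(range(64))   # 64 equally likely values -> 6 bits
summary = select_summary([flat, varied, flat], 1)
```

In this toy, `select_summary` keeps the varied frame and drops the flat ones, mirroring how an entropy term discourages redundant, low-information frames in the summary.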
