Abstract

Video analytics pipelines consume substantial network and computing resources, and in many scenarios videos are streamed from cameras to servers. Due to limited camera-side resources, most existing systems adopt frame-level filtering approaches that discard unchanged or application-irrelevant frames before transmitting data. However, video applications usually care about the regions of interest (RoIs) within frames rather than the background, and frame-level filtering fails to exploit the spatial redundancy across intra-frame regions. We argue that video analytics should instead adopt region-level filtering. Unfortunately, performing region-aware filtering on cameras without GPUs is a significant challenge. To overcome this, we build RegionFilter, which achieves region awareness on resource-constrained edge nodes by exploiting feedback from server-side DNN models. RegionFilter spatially splits each video segment into multiple sub-segments based on this region awareness. It then selects different configuration parameters (e.g., resolution and quantization parameter (QP)) according to the video application, and encodes the sub-segments into sub-streams of different qualities, which are uploaded to servers for further processing. Experiments on various video tasks and datasets show that RegionFilter achieves considerable filtering benefits (bandwidth savings of 11.7–41.2% when transmitting HD video) while always meeting the desired accuracy.
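To make the region-level idea concrete, the following is a minimal sketch (not the authors' implementation) of how a camera might assign encoding parameters to spatial sub-segments using RoI feedback from the server-side DNN. All names, thresholds, and the quadrant partitioning are illustrative assumptions.

```python
# Minimal sketch, assuming rectangular sub-segments and server-reported RoI boxes.
# All identifiers and config values here are hypothetical, not from the paper.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixels

@dataclass
class EncodeConfig:
    resolution_scale: float  # fraction of the original resolution
    qp: int                  # H.264/H.265 quantization parameter (higher = coarser)

# Hypothetical application profile: RoI sub-segments keep high quality,
# background sub-segments are compressed aggressively.
ROI_CONFIG = EncodeConfig(resolution_scale=1.0, qp=23)
BACKGROUND_CONFIG = EncodeConfig(resolution_scale=0.5, qp=40)

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned overlap test between two boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def select_configs(sub_segments: List[Box], roi_feedback: List[Box]) -> List[EncodeConfig]:
    """Assign an encoding config to each spatial sub-segment based on the
    RoIs fed back by the server-side DNN for a previous segment."""
    return [
        ROI_CONFIG if any(overlaps(region, roi) for roi in roi_feedback)
        else BACKGROUND_CONFIG
        for region in sub_segments
    ]

if __name__ == "__main__":
    # Four quadrants of a 1920x1080 frame treated as spatial sub-segments.
    quadrants = [(0, 0, 960, 540), (960, 0, 960, 540),
                 (0, 540, 960, 540), (960, 540, 960, 540)]
    # RoIs reported by the server (e.g., detected vehicles) for the last segment.
    rois = [(1200, 200, 300, 250)]
    for region, cfg in zip(quadrants, select_configs(quadrants, rois)):
        print(region, cfg)
```

Under these assumptions, only the sub-segments overlapping server-reported RoIs are encoded at full resolution and low QP, while the rest are downscaled and heavily quantized, which is the source of the bandwidth savings described above.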
