With the advancement of edge-based autonomous systems such as mobile Industrial IoT (IIoT) networks, edge devices can capture and upload videos at increasing bitrates. Massive numbers of edge-computing end nodes demand adequate multimedia data to satisfy the requirements of real-time video services. However, existing encoding standards, designed for Web 2.0 video services, are ill-suited to IoT video streaming. We improve our Adaptive Compression-Reconstruction framework ACORN into ACORN+, building on compressed sensing and recent advances in deep learning. At end nodes, we compress multiple sequential video frames into a single frame to reduce video volume. Given that multiple kinds of intelligent tasks are expected to run on the device side, we also design a device-cloud collaboration scheme in which deep learning-based algorithms can be executed on both the device and server sides. Experiments reveal that video analytics can be conducted directly on compressed frames. Taking action recognition as a device-cloud collaboration use case, we find that ACORN+ obtains more than a 3x speedup on compressed frames. The reconstruction algorithm in ACORN+ achieves 1-4 dB improvements. Moreover, the encoding time cost and the encoded video volume are reduced by more than 4x under the ACORN+ framework.
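To make the frame-aggregation step concrete, the sketch below illustrates one common compressed-sensing formulation for collapsing several sequential frames into a single measurement frame, in the style of snapshot compressive imaging. The mask design, group size, and array shapes are illustrative assumptions for exposition, not ACORN+'s actual encoder.

```python
import numpy as np

# Illustrative sketch (assumed formulation, not ACORN+'s actual design):
# modulate each of T sequential frames with its own binary sensing mask,
# then sum along time so that T frames collapse into 1 frame.

rng = np.random.default_rng(0)

T, H, W = 8, 64, 64                        # frames per group, frame size (assumed)
frames = rng.random((T, H, W))             # stand-in for a captured video clip
masks = rng.integers(0, 2, size=(T, H, W)) # one random binary mask per frame

# Element-wise modulation followed by a temporal sum: T frames -> 1 frame.
compressed = (masks * frames).sum(axis=0)

print(compressed.shape)  # (64, 64): a single frame to upload or analyze
```

Under this kind of scheme, the device transmits only the compressed frame (plus the shared masks), and a server-side deep learning model can either reconstruct the original frames or, as the abstract notes, run analytics on the compressed frame directly.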