Abstract

With the growth of edge-based autonomous systems such as mobile Industrial IoT (IIoT) networks, edge devices can capture and upload videos at increasing bitrates. Massive numbers of edge-computing end nodes demand adequate multimedia throughput to satisfy the requirements of real-time video services. However, existing encoding standards for video services in Web 2.0 are not designed for IoT video streaming. We improve our Adaptive Compression-Reconstruction framework, ACORN, into ACORN+, building on compressed sensing and recent advances in deep learning. At end nodes, we compress multiple sequential video frames into a single frame to reduce video volume. Given that multiple kinds of intelligent tasks are expected to be finished on the device side, we also design a device-cloud collaboration scheme in which deep learning-based algorithms can be executed on both the device and server sides. Experiments reveal that video analytics can be conducted directly on compressed frames. Taking action recognition as a device-cloud collaboration use case, we find that ACORN+ obtains a more than 3x speedup on compressed frames. The reconstruction algorithm in ACORN+ achieves 1-4 dB improvements. Moreover, the encoding time cost and the encoded video volume are both reduced by more than 4x under the ACORN+ framework.
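To make the compression step concrete, below is a minimal sketch of a snapshot-compressive-imaging-style forward model in which T sequential frames are modulated by per-pixel binary masks and summed into a single measurement frame. The mask design, frame count, and variable names here are illustrative assumptions, not ACORN+'s actual sensing operator or reconstruction network.

```python
import numpy as np

# Illustrative compressed-sensing forward model (an assumption, not the
# paper's exact method): T consecutive frames are modulated by per-frame
# binary masks and summed into one compressed frame.

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64                      # frames per snapshot, frame size (illustrative)
frames = rng.random((T, H, W))           # stand-in for T consecutive video frames
masks = rng.integers(0, 2, (T, H, W))    # per-frame binary modulation masks

# Encoder (end node): one compressed frame replaces T raw frames, so the
# uploaded video volume shrinks by roughly a factor of T.
compressed = (masks * frames).sum(axis=0)

# Decoder (cloud): a learned reconstruction network would invert this
# measurement; analytics such as action recognition can also run on
# `compressed` directly, which is where the speedup on compressed frames
# comes from.
print(compressed.shape)                  # (64, 64): a single measurement frame
```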
