Abstract

Ultra-low-latency transmission, high-speed broadband, and ubiquitous access points in modern cities have greatly alleviated the last-mile problem of live streaming services. Yet sophisticated video compression of high-definition video is still unavoidable before transmission, which imposes overwhelming workloads on lightweight video collectors. In this paper, we explore computation offloading of live streaming video compression for UAVs, which are characterized by limited computational capacity and fixed trajectories. In particular, a global motion model is used for inter-frame residual coding in place of motion vector estimation, which typically accounts for more than 50% of the computational complexity of traditional H.264/H.265 encoders. We propose an edge-based Joint Video Coding (eJVC) scheme that reduces the encoding complexity of the UAV video collector by up to 84.04%. Specifically, an attention network distinguishes foreground from background blocks in each frame, and an LSTM neural network on the edge server predicts the auxiliary data that the UAV video collector needs for video coding. In addition, the proposed solution accommodates changes in flight direction when the control signal is provided in advance. Finally, a prototype system is implemented with a real-world dataset, and the experimental results show that the proposed solution significantly reduces computation time with little rate-distortion performance loss.
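The core idea of replacing per-block motion search with a global motion model can be illustrated with a minimal sketch. The example below is not the paper's implementation: it assumes a toy integer-translation motion model (real encoders would use an affine or perspective warp), uses `np.roll` as a stand-in for frame warping, and names (`global_motion_residual`, the `dx`/`dy` parameters) are hypothetical. When the camera motion of a UAV on a fixed trajectory is known or predicted (e.g., by the edge server), the previous frame can be warped once and subtracted, so background blocks need no motion vector search at all.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_motion_residual(prev, curr, dx, dy):
    # Warp the previous frame by the global (camera) motion and subtract,
    # skipping per-block motion vector search entirely. np.roll is a toy
    # wrap-around warp standing in for a real affine/perspective warp.
    warped = np.roll(prev, shift=(dy, dx), axis=(0, 1))
    return curr.astype(np.int16) - warped.astype(np.int16)

# Toy frames: the whole scene shifts by a known global motion of
# (dx=2, dy=1), as it would for a UAV flying a fixed trajectory.
prev = rng.integers(0, 256, size=(16, 16)).astype(np.int16)
curr = np.roll(prev, shift=(1, 2), axis=(0, 1))  # purely global motion

residual = global_motion_residual(prev, curr, dx=2, dy=1)
print(int(np.abs(residual).sum()))  # 0: the global model captures all motion
```

In a full scheme, blocks flagged as foreground (here, what the attention network would identify) deviate from the global model and would fall back to local prediction, while background blocks are coded directly from this cheap global residual.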
