Abstract

The live streaming of omnidirectional video (ODV) on mobile devices demands considerable network resources; thus, current mobile networks cannot provide users with high-quality ODV on par with conventional flat videos. We observe that mobile devices in fact underutilize their graphics processing units (GPUs) while processing ODVs; hence, we see an opportunity to exploit video super-resolution (VSR) to improve ODV quality. However, the device-to-device discrepancy in GPU capability and the dynamic behavior of GPU frequency on mobile devices make VSR-enhanced ODV streaming challenging. In this paper, we propose OmniLive, an on-device VSR system for mobile ODV live streaming. OmniLive addresses the dynamicity of GPU capability with an anytime-inference-based VSR technique called Omni SR. For Omni SR, we design a VSR deep neural network (DNN) model with multiple exits and an inference scheduler that selects the model's exit at runtime. OmniLive also handles the performance heterogeneity of mobile GPUs with the Omni neural architecture search (NAS) scheme, which finds an appropriate DNN model for each mobile device using Omni SR-specific search techniques. We implemented OmniLive as a fully functioning system encompassing a streaming server and an Android application. The experimental results show that our anytime VSR model provides four-times-upscaled video while saving up to 57.15% of inference time compared with the prior super-resolution model with the lowest inference time on mobile devices. Moreover, OmniLive maintains 30 frames per second while fully utilizing the GPUs of various mobile devices.
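The anytime-inference idea above can be illustrated with a small sketch: a multi-exit model whose scheduler picks the deepest exit that still fits the per-frame latency budget under the current GPU speed. This is a minimal illustration, not the paper's implementation; all class names, latencies, and the `gpu_speed_factor` parameter are hypothetical.

```python
# Hypothetical sketch of anytime inference with a multi-exit model:
# pick the deepest exit whose estimated latency fits the frame budget.
# All names and numbers here are illustrative, not from the paper.

FPS_TARGET = 30
FRAME_BUDGET_MS = 1000.0 / FPS_TARGET  # ~33.3 ms per frame

class MultiExitSRModel:
    """Toy stand-in for a multi-exit VSR DNN: each block refines the
    frame further, and each block boundary can serve as an early exit."""
    def __init__(self, block_latencies_ms):
        # Per-block latencies would be profiled on the target device.
        self.block_latencies_ms = block_latencies_ms

    def num_exits(self):
        return len(self.block_latencies_ms)

    def latency_up_to(self, exit_idx):
        # Cumulative cost of running blocks 0..exit_idx.
        return sum(self.block_latencies_ms[: exit_idx + 1])

def choose_exit(model, gpu_speed_factor):
    """Pick the deepest exit whose scaled latency fits the frame budget.
    gpu_speed_factor > 1.0 models a throttled (down-clocked) GPU."""
    best = 0  # exit 0 is always taken, so every frame produces output
    for e in range(model.num_exits()):
        if model.latency_up_to(e) * gpu_speed_factor <= FRAME_BUDGET_MS:
            best = e
    return best

# Example: four blocks of 8 ms each. At full clock all blocks fit the
# budget; at half speed the scheduler falls back to an earlier exit.
model = MultiExitSRModel(block_latencies_ms=[8.0, 8.0, 8.0, 8.0])
print(choose_exit(model, gpu_speed_factor=1.0))  # → 3 (deepest exit)
print(choose_exit(model, gpu_speed_factor=2.0))  # → 1 (early exit)
```

Because every exit yields a usable upscaled frame, a frame is never dropped outright; a slower GPU simply receives a less-refined result, which matches the anytime-inference property described in the abstract.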
