Abstract

We propose a real-time convolutional neural network (CNN) training and compression method for delivering high-quality live video even in poor network environments. The server delivers a low-resolution video segment along with the corresponding CNN for super-resolution (SR), and the client then applies the CNN to the segment to recover high-resolution video frames. To generate a trained CNN corresponding to a video segment in real time, our method rapidly increases training accuracy by promoting the overfitting property of the CNN and by using curriculum-based training. In addition, assuming that the pretrained CNN has already been downloaded on the client side, we transfer only the residual values between the updated and pretrained CNN parameters. Because their distribution range is significantly narrower than that of the updated CNN parameters, these residuals can be quantized to a low bit-width in real time with minimal loss. Quantitatively, our neural-enhanced adaptive live streaming pipeline (NEALS) achieves higher SR accuracy and a lower CNN compression loss rate within a constrained training time than the state-of-the-art CNN training and compression method. NEALS also achieves 15% to 48% higher quality of experience (QoE) than state-of-the-art neural-enhanced live streaming systems.
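To illustrate the residual-transfer idea described in the abstract, the following minimal sketch (assuming PyTorch models and uniform per-tensor quantization; the function names, parameters, and 8-bit setting are hypothetical and not taken from the paper) encodes only the parameter residuals between the updated and pretrained CNNs and reconstructs the updated CNN on the client side.

```python
# Illustrative sketch only: residual-parameter encoding with low-bit uniform
# quantization. Assumes both models share the same architecture (PyTorch).
import torch

def quantize_residuals(pretrained_model, updated_model, num_bits=8):
    """Server side: encode residuals between updated and pretrained parameters,
    uniformly quantized to `num_bits` bits per tensor (num_bits <= 8 here)."""
    encoded = {}
    for (name, p_pre), (_, p_upd) in zip(
        pretrained_model.named_parameters(), updated_model.named_parameters()
    ):
        residual = (p_upd - p_pre).detach()
        # Residuals span a much narrower range than the raw weights,
        # so a coarse uniform grid loses little information.
        max_abs = residual.abs().max().clamp(min=1e-12)
        scale = max_abs / (2 ** (num_bits - 1) - 1)
        q = torch.round(residual / scale).to(torch.int8)
        encoded[name] = (q, scale.item())
    return encoded

def apply_residuals(pretrained_model, encoded):
    """Client side: reconstruct the updated CNN from the pretrained copy."""
    with torch.no_grad():
        for name, p_pre in pretrained_model.named_parameters():
            q, scale = encoded[name]
            p_pre.add_(q.to(p_pre.dtype) * scale)
```

Only the int8 tensors and one scale per tensor would need to be transmitted per segment, which is what makes the low-bit residual encoding attractive for real-time delivery.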
