In recent years, 360-degree videos have gained significant traction for the immersive experiences they provide. However, their adoption substantially escalates bandwidth demands, requiring roughly four to ten times more bandwidth than traditional video formats. This makes it challenging to sustain high video quality over bandwidth-limited or unstable networks. In contemporary video delivery systems, a trend has emerged of exploiting client-side computational power and deep neural networks to enhance video quality while reducing bandwidth consumption. These approaches segment a video into chunks and apply a super-resolution (SR) model to each chunk, streaming low-resolution (LR) chunks together with their corresponding SR models to the client. Although such methods improve both video quality and transmission efficiency for conventional videos, they incur much higher computational costs when applied to 360-degree content, which limits their widespread deployment. This paper introduces HVASR, a method for 360-degree videos that leverages viewport information to segment videos more precisely, reducing both model training costs and bandwidth requirements. HVASR further incorporates a viewport-aware training strategy to boost performance while cutting computational expense. Experimental results demonstrate that HVASR achieves an average utility improvement of 12.46% to 40.89% across various scenes.