Abstract

360-degree video is an integral part of Virtual Reality systems. However, transmitting 360-degree video over the network is challenging due to its large size. To reduce the network bandwidth required for 360-degree video, Viewport Adaptive Streaming (VAS) has been proposed. A key issue in VAS is how to estimate future user viewing directions. In this paper, we carry out an evaluation of typical viewport estimation methods for VAS. We find that the Long Short-Term Memory (LSTM)-based method achieves the best trade-off between accuracy and redundancy. Using cross-user behaviors achieves the highest accuracy at the expense of high redundancy. Meanwhile, the widely used linear regression-based method performs comparably to the simple method that reuses the last viewport position. In addition, we find that all considered methods suffer significant performance degradation when the prediction horizon increases beyond 1 second.
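To make the comparison concrete, the two simplest predictors mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the viewing direction is given as a yaw-angle trace sampled at uniform intervals, ignores angle wraparound at 360°, and uses hypothetical function names.

```python
import numpy as np

def predict_last(history, horizon):
    """Baseline: repeat the most recent viewport angle for every
    future sample in the prediction horizon."""
    return np.full(horizon, history[-1])

def predict_linear(history, horizon):
    """Linear regression: fit a line to the recent yaw samples and
    extrapolate it over the prediction horizon.
    Note: a real predictor would handle the 0/360-degree wraparound."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    future_t = np.arange(len(history), len(history) + horizon)
    return slope * future_t + intercept

# Hypothetical yaw trace (degrees), head turning at a steady rate:
yaw = np.array([0.0, 10.0, 20.0, 30.0])
last = predict_last(yaw, 2)      # stays at 30 degrees
linear = predict_linear(yaw, 2)  # continues the 10-degree-per-sample trend
```

On smooth, steady head motion the linear extrapolation tracks the trend while the last-position baseline lags behind; the abstract's finding is that on real traces the two end up performing comparably, since head motion is rarely this regular.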
