Abstract

Viewport prediction and prefetching have a significant impact on VR video streaming performance. This work proposes a novel federated learning-based viewport prediction model training algorithm, ComPer-FedAvg. The proposed algorithm leverages a VR video’s common viewing pattern and individual users’ personal viewing patterns to train the prediction model in a distributed and privacy-preserving manner. Further, taking viewport prediction accuracy into account, a stochastic game is formulated to solve the VR streaming network’s communication resource allocation problem, in which limited communication resource blocks are auctioned to users to maximize the overall VR viewing experience. For each user, the auction is decomposed into two disjoint subproblems: optimal data rate requesting and optimal true value claiming (bidding). We analytically prove that the optimal true value claim equals the VR viewing reward under the given data rate. Because users lack global information when requesting data rates, we reformulate the data rate requesting problem as a partially observable Markov decision process (POMDP) and adopt a novel deep reinforcement learning algorithm to solve it. Evaluation and simulation results show that the proposed viewport prediction and VR streaming schemes outperform conventional solutions in terms of prediction accuracy and VR viewing experience.
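To illustrate the kind of common/personal split the abstract describes, the sketch below shows a generic FedAvg-style round in which only a shared "common pattern" model is averaged at the server while each user's personal parameters stay on-device. This is not the authors' ComPer-FedAvg implementation; the function names, the parameter split, the sample-count weighting, and the omitted local training step are all assumptions for illustration.

```python
import numpy as np

def local_update(global_common, personal_head, local_data):
    """Hypothetical local step: each user would fine-tune a copy of the shared
    (common-pattern) parameters together with its personal parameters on its own
    viewing traces. The gradient steps are omitted in this sketch."""
    common = {k: v.copy() for k, v in global_common.items()}
    # ... local training on (common, personal_head) using local_data would go here ...
    return common, personal_head

def federated_round(global_common, clients):
    """One illustrative aggregation round: the server averages only the common
    parameters (weighted by local sample count); personal parameters never leave
    the client, which is one way to realize the privacy-preserving split."""
    updates, weights = [], []
    for client in clients:
        common, client["personal"] = local_update(
            global_common, client["personal"], client["data"]
        )
        updates.append(common)
        weights.append(len(client["data"]))
    total = float(sum(weights))
    return {
        k: sum((w / total) * u[k] for w, u in zip(weights, updates))
        for k in global_common
    }

# Toy usage with arbitrary tensor shapes:
global_common = {"w": np.zeros((4, 4))}
clients = [{"personal": {"h": np.zeros(4)}, "data": list(range(10))} for _ in range(3)]
global_common = federated_round(global_common, clients)
```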
