Abstract

In this paper, we propose a prediction algorithm that combines Long Short-Term Memory (LSTM) with an attention model, based on machine learning, to predict the vision coordinates of users watching 360-degree videos in a Virtual Reality (VR) or Augmented Reality (AR) system. Predicting the vision coordinates during video streaming is important when the network condition is degraded. However, traditional prediction models such as Moving Average (MA) and Autoregressive Moving Average (ARMA) are linear and therefore cannot capture nonlinear relationships. For this reason, machine learning models based on deep learning have recently been used for nonlinear prediction. We use the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural network methods, which originate from Recurrent Neural Networks (RNNs), to predict the head position in 360-degree videos, and we apply the attention model to the LSTM to obtain more accurate results. We also compare the performance of the proposed model with other machine learning models, such as the Multi-Layer Perceptron (MLP) and RNN, using the root mean squared error (RMSE) between the predicted and actual coordinates. We demonstrate that our model predicts the vision coordinates more accurately than the other models across various videos.
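
To make the approach concrete, the following is a minimal sketch (not the authors' published code) of an LSTM predictor with a simple attention layer over its hidden states, trained to map a window of past viewing coordinates to the next coordinate and evaluated with RMSE; the window length, hidden size, 2-D (yaw, pitch) coordinate format, and all names are illustrative assumptions.

```python
# Hypothetical sketch: LSTM + attention for next-coordinate prediction.
import torch
import torch.nn as nn

class AttnLSTMPredictor(nn.Module):
    def __init__(self, coord_dim=2, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(coord_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)          # scores each time step
        self.out = nn.Linear(hidden_dim, coord_dim)   # maps context to the next coordinate

    def forward(self, x):                  # x: (batch, window, coord_dim)
        h, _ = self.lstm(x)                # h: (batch, window, hidden_dim)
        weights = torch.softmax(self.attn(h), dim=1)  # attention weights over time steps
        context = (weights * h).sum(dim=1)            # weighted sum of hidden states
        return self.out(context)           # predicted next coordinate

def rmse(pred, target):
    """Root mean squared error between predicted and actual coordinates."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

# Dummy usage: predict the next (yaw, pitch) pair from the last 30 samples.
model = AttnLSTMPredictor()
past = torch.randn(8, 30, 2)     # 8 synthetic traces, 30 past coordinates each
actual = torch.randn(8, 2)
print(rmse(model(past), actual))
```

The same RMSE metric can be computed for MLP, RNN, and GRU baselines to reproduce the kind of comparison described above.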

Highlights

  • Virtual Reality (VR) is a simulated experience that can be similar to or different from the real world

  • Applying the attention model reduces the prediction error and improves the performance of the underlying neural network models for all videos

  • We create a prediction model based on the attention model combined with machine learning methods that use Recurrent Neural Networks (RNNs)

Introduction

Virtual Reality (VR) is a simulated experience that can be similar to or different from the real world. VR can be applied to entertainment and education. Another type of VR is Augmented Reality (AR), which combines the real and virtual worlds with real-time interaction and accurate 3D registration of virtual and real objects [1]. Implementing these systems requires VR headsets that generate images, sounds, and other sensations. Because 360-degree video is typically streamed to these headsets over a network, and a low-latency network cannot be guaranteed everywhere, we propose another method to implement a real-time VR and AR system. This method predicts the head movement of users watching a 360-degree video with a Head-Mounted Display (HMD), so that the system can automatically track the user's focus in real time even when the network condition is poor.
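
As a small illustration of the input such a predictor would consume, the sketch below turns a per-frame head-orientation trace into (past window, next coordinate) training pairs; the (yaw, pitch) trace format, sampling rate, and window length are assumptions rather than the authors' dataset layout.

```python
# Hypothetical sketch: building sliding-window samples from one user's head trace.
import numpy as np

def make_windows(trace, window=30):
    """Split a coordinate trace into (past window, next coordinate) pairs."""
    xs, ys = [], []
    for t in range(len(trace) - window):
        xs.append(trace[t:t + window])   # the past `window` coordinates
        ys.append(trace[t + window])     # the coordinate to predict
    return np.stack(xs), np.stack(ys)

# Example: a synthetic 10-second trace sampled at 30 Hz.
trace = np.column_stack([np.linspace(0, 90, 300),   # yaw drifting to the right
                         np.zeros(300)])            # pitch held level
X, y = make_windows(trace)
print(X.shape, y.shape)   # (270, 30, 2) (270, 2)
```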
