Abstract

Numerous studies have attempted to detect and reduce cybersickness in real time. However, detecting and mitigating cybersickness only after its onset may be ineffective, because symptoms tend to persist once they first appear. By forecasting the onset of cybersickness, it may be possible to reduce its severity through earlier intervention. This research proposes a multimodal deep fusion approach that forecasts cybersickness from the user's physiological, head-tracking, and eye-tracking data. We propose several hybrid multimodal deep fusion neural networks built on long short-term memory (LSTM), neural basis expansion analysis for interpretable time series forecasting (N-BEATS), and deep temporal convolutional network (DeepTCN) models to forecast cybersickness 30-60 seconds before its onset. To validate the proposed approach, we recruited 30 participants, each of whom was immersed in five virtual reality simulations. We collected eye-tracking, head-tracking, heart rate, and galvanic skin response data and used the Fast Motion Sickness (FMS) scale as ground truth. Our results suggest that the DeepTCN model with our proposed multimodal fusion network can forecast cybersickness onset 60 seconds in advance with a root-mean-square error of 0.49 (on a 0-10 scale). Furthermore, fusing eye-tracking, heart rate, and galvanic skin response data outperformed the other data fusion combinations we evaluated. This research clarifies how far in advance cybersickness can be forecast, paving the way for future research on early cybersickness mitigation approaches.
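To make the fusion idea concrete, the following is a minimal PyTorch sketch of one plausible multimodal deep fusion forecaster: per-modality LSTM encoders whose final hidden states are concatenated and passed to a regression head that predicts a horizon of future sickness scores. The class name, feature dimensions, window length, and 1 Hz sampling assumption are illustrative only and do not reflect the authors' actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class MultimodalFusionForecaster(nn.Module):
    """Illustrative fusion forecaster: one LSTM encoder per modality,
    concatenation-based fusion, and a head that outputs one predicted
    FMS score per future second (hypothetical design, not the paper's)."""

    def __init__(self, modality_dims, hidden=64, horizon=60):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.LSTM(d, hidden, batch_first=True) for d in modality_dims]
        )
        self.head = nn.Sequential(
            nn.Linear(hidden * len(modality_dims), 128),
            nn.ReLU(),
            nn.Linear(128, horizon),  # one prediction per future time step
        )

    def forward(self, xs):
        # xs: list of tensors, one per modality, each (batch, time, features)
        feats = []
        for x, enc in zip(xs, self.encoders):
            _, (h, _) = enc(x)   # h: (num_layers, batch, hidden)
            feats.append(h[-1])  # last hidden state summarizes the window
        fused = torch.cat(feats, dim=-1)
        return self.head(fused)  # (batch, horizon) forecast of FMS scores

# Example: fuse eye-tracking (4 features), heart rate (1), and GSR (1)
# over a 30 s window at an assumed 1 Hz, forecasting 60 s ahead.
model = MultimodalFusionForecaster([4, 1, 1], horizon=60)
windows = [torch.randn(8, 30, d) for d in (4, 1, 1)]
fms_forecast = model(windows)  # shape: (8, 60)
```

Concatenating per-modality embeddings is only one fusion strategy; the same scaffold could swap the LSTM encoders for N-BEATS- or TCN-style blocks, which is the kind of variation the abstract's hybrid models compare.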
