Abstract

Affective computing has been widely used to detect and recognize emotional states. The main goal of this study was to automatically detect emotional states using machine learning algorithms. The experimental procedure elicited emotional states with film clips presented in an immersive and a non-immersive virtual reality setup. The participants’ physiological signals were recorded and used to train machine learning models to recognize users’ emotional states. In addition, two subjective emotional rating scales were administered to rate each film clip. Results showed no significant differences between presenting the stimuli at the two degrees of immersion. Regarding emotion classification, for both physiological signals and subjective ratings, user-dependent models performed better than user-independent models. With user-dependent models, we obtained average accuracies of 69.29 ± 11.41% and 71.00 ± 7.95% for the subjective ratings and physiological signals, respectively. With user-independent models, the accuracies were 54.0 ± 17.2% and 24.9 ± 4.0%, respectively. We interpret these results as a consequence of high inter-subject variability among participants, suggesting the need for user-dependent classification models. In future work, we intend to develop new classification algorithms and transfer them to a real-time implementation, making it possible to adapt a virtual reality environment in real time according to the user’s emotional state.
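To make the user-dependent versus user-independent comparison concrete, the following is a minimal sketch of the two evaluation schemes, assuming scikit-learn, synthetic stand-in data, and an RBF-kernel SVM. The features, classifier, and data shapes are all hypothetical illustrations; the abstract does not specify the study's actual feature set or model.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for windowed physiological features and emotion labels;
# the real study would use features extracted from the recorded signals.
n_subjects, per_subject, n_features, n_classes = 10, 40, 8, 4
X = rng.normal(size=(n_subjects * per_subject, n_features))
y = np.tile(np.arange(n_classes), n_subjects * per_subject // n_classes)
subjects = np.repeat(np.arange(n_subjects), per_subject)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# User-dependent evaluation: train and test within each subject's own data.
dep = [cross_val_score(clf, X[subjects == s], y[subjects == s],
                       cv=StratifiedKFold(n_splits=5)).mean()
       for s in range(n_subjects)]
print(f"user-dependent mean accuracy: {np.mean(dep):.3f}")

# User-independent evaluation: hold out one subject entirely per fold, so the
# model must generalize across the inter-subject variability the abstract notes.
indep = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f"user-independent mean accuracy: {indep.mean():.3f}")

The key design difference is the split: stratified folds within a single subject's samples versus leave-one-subject-out grouping, which is why the two schemes can yield very different accuracies on the same data.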
