Abstract

In this article, a head-gesture recognition system is developed to identify emotional inputs and feed them to an adaptive music system (LitSens) in virtual reality applications, thereby improving virtual presence. Two iterations of this system, both founded on neural networks, are presented: the first is based on a multi-layer perceptron, whereas the second consists of a hybrid one-dimensional convolutional neural network. In both cases, the system is able to recognise fear by analysing head gestures. The first implementation recognises this emotion more quickly, whereas the second is slower but considerably more accurate, which makes it the better option overall for soundtrack adaptation. An experiment is then detailed that validates the behaviour of the gestural recogniser when detecting fear in players. The results of this validation are generally positive, but reveal the need to improve system responsiveness.
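As an illustrative sketch only: the abstract does not publish the network's actual architecture, input features, or hyper-parameters, so the Python/Keras snippet below shows just one plausible shape for a hybrid one-dimensional CNN that classifies fear from windows of head-tracking data. The window length, channel count, layer sizes, and the reading of "hybrid" as convolutional feature extraction followed by a recurrent layer are all assumptions, not the authors' design.

```python
# Hypothetical sketch of a "hybrid 1D CNN" fear classifier for head gestures.
# All dimensions (window length, channels, layer sizes) are placeholder
# assumptions; the article does not specify them.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 64    # assumed samples per head-gesture window
CHANNELS = 6   # assumed head-tracking channels, e.g. yaw/pitch/roll + velocities

def build_hybrid_1d_cnn() -> tf.keras.Model:
    """Binary classifier: does a head-gesture window express fear?"""
    model = models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        # Convolutional front end extracts local motion patterns.
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(2),
        # "Hybrid" is read here as CNN features feeding a recurrent layer;
        # the paper may combine its components differently.
        layers.LSTM(32),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(fear)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_hybrid_1d_cnn()
    # Dummy batch standing in for real head-tracking windows.
    x = np.random.randn(8, WINDOW, CHANNELS).astype("float32")
    print(model.predict(x, verbose=0).ravel())  # per-window fear probabilities
```

Keeping the temporal axis intact is what distinguishes this family of models from a multi-layer perceptron, which would flatten each window into a single feature vector before classification.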
