Abstract

Service robots still pose many design problems. Among the most important is human-robot integration, a problem with many facets, both at the level of physical interaction and at the emotional level. Our research group is evaluating different algorithms on its robotic platform, ARMOS TurtleBot. Among these, it recently developed a scheme that identifies a person's emotions from facial characteristics. Under laboratory conditions the scheme reached a 92% success rate; however, in low-light conditions, or when the face was partially covered, this rate decreased considerably. Consequently, we propose the design of an alternative loop that supports the first scheme and increases the success rate by estimating the person's emotional state from the voice. For this purpose, we trained a convolutional neural network on spectral images built from audio characteristics of the same 7 emotions used in the first algorithm. The model achieved a 69% hit rate and, together with our face-based algorithm, raised the total performance of the system to 96.5%.
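As a rough illustration of the audio front end described above (not the authors' actual code), the step that turns a voice clip into a spectral image for a CNN could be sketched as follows. The sample rate, window length, and normalization here are placeholder assumptions; the paper's exact audio characteristics and network are not specified in this abstract.

```python
import numpy as np
from scipy.signal import spectrogram

def audio_to_spectral_image(wave, sr=16000, nperseg=512, noverlap=256):
    """Convert a 1-D audio waveform into a log-scaled spectrogram
    'image' (2-D array in [0, 1]) that a CNN could take as input.
    All parameter values are illustrative assumptions."""
    f, t, Sxx = spectrogram(wave, fs=sr, nperseg=nperseg, noverlap=noverlap)
    img = 10.0 * np.log10(Sxx + 1e-10)                          # power in dB
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # min-max normalize
    return img

# Synthetic one-second 440 Hz tone standing in for a speech clip.
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)
image = audio_to_spectral_image(tone, sr)
# image is a 2-D array: (frequency bins, time frames)
```

A batch of such images, one per labeled utterance across the 7 emotion classes, would then be fed to a convolutional classifier in the usual way.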
