Abstract
Service robots still face many open design problems. Among the most important is human-robot integration, a problem with many facets at both the physical and the emotional level of interaction. Our research group is evaluating different algorithms on its robotic platform, ARMOS TurtleBot. Among them, it recently developed a scheme that identifies a person's emotions from facial characteristics. Under laboratory conditions the scheme reached a 92% success rate; however, in low light or when the face was partially covered, this rate decreased considerably. Consequently, we propose an alternative support loop that increases the success rate by estimating the person's emotional state from the voice. For this purpose, we train a convolutional neural network on spectral images built from audio features of the same seven emotions used in the first algorithm. The model achieved a 69% hit rate on its own and, combined with the face-based algorithm, raised the overall system performance to 96.5%.
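To illustrate the voice branch described above, the following is a minimal sketch of turning an audio clip into a spectral image and classifying it into seven emotion classes with a small convolutional network. It assumes mel-spectrograms computed with librosa and a compact Keras model; the label set, layer sizes, image dimensions, and file handling are assumptions for illustration, not the authors' exact pipeline.

# Minimal sketch: audio clip -> mel-spectrogram "image" -> small CNN over 7 emotion classes.
# The emotion labels, spectrogram parameters, and network architecture are assumptions.
import numpy as np
import librosa
import tensorflow as tf

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]  # assumed label set

def audio_to_spectral_image(path, sr=22050, n_mels=128, frames=128):
    """Load an audio clip and build a fixed-size log mel-spectrogram treated as an image."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)                   # log scale
    mel_db = librosa.util.fix_length(mel_db, size=frames, axis=1)   # pad/trim time axis
    return mel_db[..., np.newaxis]                                  # shape: (n_mels, frames, 1)

def build_cnn(input_shape=(128, 128, 1), n_classes=len(EMOTIONS)):
    """Small CNN classifier over spectrogram images."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Hypothetical usage with lists `audio_paths` and integer `labels`:
# X = np.stack([audio_to_spectral_image(p) for p in audio_paths])
# y = tf.keras.utils.to_categorical(labels, num_classes=len(EMOTIONS))
# model = build_cnn()
# model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X, y, epochs=20, validation_split=0.2)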