Abstract

In recent years, emotion recognition has been a prominent topic in computer science, particularly in Human-Robot Interaction (HRI) and Robot-Robot Interaction (RRI). Through emotion recognition and expression, robots can better interpret human behavior and emotion and can communicate in a more human-like way. Some research exists on unimodal emotion systems for robots, but because human emotions are expressed multimodally in the real world, multimodal systems can achieve better recognition. Beyond this multimodality of human emotion, a flexible and reliable learning method helps robots recognize emotions more accurately and makes interaction more beneficial. Deep learning has shown its strength in this area, and our model is a multimodal method that uses three main modalities (facial expression, speech, and gesture) for emotion recognition and expression in robots. We implemented the model for the six basic emotion states; other emotional states, such as mixed emotions, remain very difficult for robots to distinguish. Our experiments show that a significant improvement in recognition accuracy is achieved when we combine a convolutional neural network (CNN) with multimodal information, from 91% reported in previous research [27] to 98.8%.
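The abstract does not specify the network architecture, so the following is only a minimal illustrative sketch, assuming a late-fusion design in which each of the three modalities (face, speech, gesture) is encoded by its own small CNN and the concatenated features are classified into the six basic emotions. All class names, layer sizes, and input shapes here are hypothetical, not taken from the paper.

```python
# Hypothetical late-fusion multimodal emotion classifier (PyTorch).
# Assumed inputs: RGB face crops, mel-spectrogram speech frames,
# and gesture/pose maps; output: logits over six basic emotions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Small CNN mapping one modality to a fixed-size feature vector."""
    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))

class MultimodalEmotionNet(nn.Module):
    """Late fusion: concatenate the three modality features,
    then predict one of the six basic emotion classes."""
    def __init__(self, num_classes: int = 6, feat_dim: int = 128):
        super().__init__()
        self.face = ModalityEncoder(3, feat_dim)     # RGB face image
        self.speech = ModalityEncoder(1, feat_dim)   # mel spectrogram
        self.gesture = ModalityEncoder(3, feat_dim)  # gesture/pose map
        self.head = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, face, speech, gesture):
        fused = torch.cat(
            [self.face(face), self.speech(speech), self.gesture(gesture)],
            dim=1,
        )
        return self.head(fused)

# Usage example with dummy batches for each modality.
model = MultimodalEmotionNet()
logits = model(
    torch.randn(4, 3, 64, 64),   # face crops
    torch.randn(4, 1, 64, 64),   # speech spectrograms
    torch.randn(4, 3, 64, 64),   # gesture maps
)
print(logits.shape)  # torch.Size([4, 6])
```

Late fusion is only one plausible design choice; the reported gain over the unimodal baseline could equally come from early- or intermediate-fusion variants.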
