Abstract

Humans can express their own emotions and estimate the emotional states of others during communication. This paper proposes a unified model that can both estimate the emotional states of others and generate emotional self-expressions. The proposed model utilizes a multimodal restricted Boltzmann machine (RBM), a type of stochastic neural network. RBMs can abstract latent information from input signals and reconstruct the signals from that latent information. We use these two characteristics to rectify issues affecting previously proposed emotion models: constructing a shared emotional representation for both estimation and generation, instead of relying on heuristic features, and realizing mental simulation to infer the emotions of others from their ambiguous signals. Our experimental results showed that the proposed model can extract features representing the distribution of emotion categories via self-organized learning. Imitation experiments demonstrated that, using our model, a robot can generate expressions better than a direct mapping mechanism can when the expressions of others contain emotional inconsistencies. Moreover, our model can improve the estimated belief in the emotional states of others by generating imaginary sensory signals from defective multimodal signals (i.e., mental simulation). These results suggest that these abilities of the proposed model can facilitate emotional human–robot communication in more complex situations.
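The abstract relies on two RBM operations: abstracting latent features from an input signal and reconstructing the signal from those features. As a rough, hedged illustration of these operations, below is a minimal sketch of a single Bernoulli RBM trained with one-step contrastive divergence (CD-1) in Python with NumPy. It is not the authors' multimodal implementation; the class, method names, and toy data are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM (illustrative sketch, not the paper's model)."""

    def __init__(self, n_visible, n_hidden, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible bias
        self.b_h = np.zeros(n_hidden)   # hidden bias

    def abstract(self, v):
        """Infer hidden activation probabilities from a visible vector."""
        return sigmoid(v @ self.W + self.b_h)

    def reconstruct(self, h):
        """Generate visible activation probabilities from hidden features."""
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.1):
        """One CD-1 update on a single example."""
        h0 = self.abstract(v0)
        h_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.reconstruct(h_sample)
        h1 = self.abstract(v1)
        # Move toward the data statistics, away from the model's own.
        self.W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
        self.b_v += lr * (v0 - v1)
        self.b_h += lr * (h0 - h1)

# Toy example: complete a "defective" signal by abstracting and
# then reconstructing, a crude stand-in for mental simulation.
rbm = RBM(n_visible=6, n_hidden=3)
observed = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
for _ in range(200):
    rbm.cd1_step(observed)
partial = observed.copy()
partial[3:] = 0.5  # missing modality replaced by an uninformative value
imagined = rbm.reconstruct(rbm.abstract(partial))
print(np.round(imagined, 2))
```

Filling in the masked half of the vector by abstracting and then reconstructing mirrors, in miniature, the mental-simulation step in which the model generates imaginary sensory signals from defective multimodal input.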
