Abstract
We introduce the first steps in a developmental robot called MEI (multimodal emotional intelligence), a robot that can understand and express emotions in voice, gesture and gait using a controller trained only on voice. Although it is known that humans can perceive affect in voice, movement, music and even in stimuli as minimal as point-light displays, it is not clear how humans develop this skill. Is it innate? If not, how does this emotional intelligence develop in infants? The MEI robot develops these skills through vocal input and perceptual mapping of vocal features to other modalities. We base MEI's development on the idea that motherese is used from an early age to associate dynamic vocal contours with facial emotion. MEI uses these dynamic contours to both understand and express multimodal emotions through a unified model called SIRE (Speed, Intensity, irRegularity, and Extent). Offline experiments with MEI support its cross-modal generalization ability: a model trained on voice data can recognize happiness, sadness, and fear in a completely different modality, human gait. User evaluations of the MEI robot speaking, gesturing and walking show that it can reliably express multimodal happiness and sadness using only the voice-trained model as a basis.
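To make the idea of a modality-independent representation concrete, the sketch below shows one way a SIRE-style parameter vector extracted from voice could be re-used to drive a different modality such as gait. The parameter names, ranges, and linear mappings are illustrative assumptions for this sketch only; they are not the paper's calibrated model.

```python
from dataclasses import dataclass

@dataclass
class SIRE:
    """Modality-independent emotion parameters, normalized to [0, 1].

    Illustrative interpretation (assumed, not from the paper):
    speed        - speech rate or movement velocity
    intensity    - vocal energy or movement amplitude
    irregularity - pitch jitter or jerkiness of motion
    extent       - pitch range or spatial range of movement
    """
    speed: float
    intensity: float
    irregularity: float
    extent: float


def sire_to_gait(s: SIRE) -> dict:
    """Map a SIRE vector onto hypothetical gait parameters.

    The gait parameter names and the linear ranges below are assumptions
    made for illustration; a real system would calibrate them per robot.
    """
    return {
        "step_frequency_hz": 0.5 + 1.5 * s.speed,       # slower cadence for sadness
        "stride_length_m":   0.2 + 0.6 * s.extent,      # larger strides for happiness
        "torso_sway_deg":    2.0 + 10.0 * s.intensity,  # stronger sway at high arousal
        "timing_jitter":     0.05 * s.irregularity,     # irregular timing for fear
    }


# Example: a "happy" profile estimated from voice is reused to shape gait.
happy = SIRE(speed=0.8, intensity=0.7, irregularity=0.2, extent=0.9)
print(sire_to_gait(happy))
```

The point of the sketch is the design choice the abstract describes: because the four parameters are defined independently of any one channel, a profile learned from voice can be replayed through gesture or gait without retraining a modality-specific model.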