Abstract

Social robots are robots designed to interact and communicate directly with humans while following established social norms. However, many current robots operate in constrained settings with predefined expectations for specific social interactions. For these machines to operate in the real world, they must be capable of understanding the multiple factors that contribute to human-human interaction. One such factor is emotional intelligence. Emotional intelligence allows one to consider the emotional state of another in order to motivate, plan, and achieve one's desires. One common method of analyzing the emotional state of an individual involves analyzing the emotion displayed on their face, and several artificial intelligence (AI) systems have been developed to conduct this task. These systems are often classifiers trained using a variety of machine learning techniques that require large amounts of training data. As such, their performance is susceptible to biases arising from disproportionate representation in training datasets. Children, in particular, are often underrepresented in the primary datasets of annotated faces used to train such emotion classification systems. This work first analyzes the extent of these performance differences in commercial systems, then presents new computational techniques that mitigate some of the effects of minimal representation in datasets, and finally presents a social robot that utilizes an improved emotional AI to interact with children in various scenarios where emotional intelligence is key to successful human-robot interaction.
