Abstract

Emotion recognition holds significant importance in human communication, benefiting domains such as human-computer interaction, affective computing, and social robotics. Recent work has focused on exploiting multimodal data, encompassing audio, visual, and other cues, to enhance emotion recognition systems. However, most available datasets predominantly focus on Western cultures, overlooking the diverse emotional expressions of regions such as India. Moreover, existing datasets often neglect complex emotions such as sympathy and awe. To address these limitations, we introduce "IndEMoVis," a novel multimodal dataset of Indian emotions. It comprises 122 audio-visual responses recorded during conversations between pairs of individuals. The dataset includes 61 participants, 25 female and 36 male, aged 18 to 21, primarily from the Indian states of Maharashtra and Gujarat. It encompasses nine emotions: Neutral, Happiness, Sadness, Surprise, Disgust, Anger, Fear, Awe, and Sympathy. The annotation process follows a three-step procedure to ensure accurate emotion labeling, and annotations are also provided for intensity and confidence levels. The IndEMoVis dataset aims to support the affective computing research community in improving conversational agents, analyzing emotional intelligence, and evaluating responses in debates. Its cultural relevance and inclusion of complex emotions offer valuable insights into emotion recognition for diverse contexts.
