Abstract

Research in affective computing requires ground-truth data for training and benchmarking computational models of machine-based emotion understanding. In this paper, we propose a large video database, LIRIS-ACCEDE, for affective content analysis and related applications, including video indexing, summarization, and browsing. In contrast to existing datasets, which contain few video resources and have limited accessibility due to copyright constraints, LIRIS-ACCEDE consists of 9,800 good-quality video excerpts with large content diversity. All excerpts are shared under Creative Commons licenses and can thus be freely distributed without copyright issues. Affective annotations were collected through crowdsourcing using a pairwise video comparison protocol, which ensures that the annotations are fully consistent, as evidenced by high inter-annotator agreement despite the raters' diverse cultural backgrounds. In addition, to enable fair comparison and to benchmark the progress of future affective computational models, we provide four experimental protocols and a baseline for emotion prediction using a large set of both visual and audio features. The dataset (video clips, annotations, features, and protocols) is publicly available at: http://liris-accede.ec-lyon.fr/.
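As a rough illustration of how a pairwise comparison protocol can yield a consistent global ranking, the sketch below aggregates hypothetical crowdsourced judgments by simple win counting. The function, clip names, and toy data are illustrative assumptions only; the paper's actual aggregation scheme is not reproduced here and may differ (e.g., a sorting-based or Bradley-Terry-style procedure).

```python
# Hypothetical sketch: turning crowdsourced pairwise comparisons into a
# global ranking via win counts. Names and data are illustrative, not the
# paper's actual protocol; tie handling is deliberately omitted.
from collections import defaultdict

def rank_from_pairs(comparisons):
    """comparisons: iterable of (winner_id, loser_id) pairs, where the
    'winner' is the clip judged higher on the axis (e.g., arousal)."""
    wins = defaultdict(int)
    clips = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        clips.update((winner, loser))
    # Sort clips by number of pairwise wins, highest first.
    return sorted(clips, key=lambda c: wins[c], reverse=True)

# Toy usage: three annotator judgments over three clips.
judgments = [("clip_A", "clip_B"), ("clip_A", "clip_C"), ("clip_B", "clip_C")]
print(rank_from_pairs(judgments))  # ['clip_A', 'clip_B', 'clip_C']
```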

Highlights

  • Affective video content analysis aims at automatic recognition of emotions elicited by videos

  • While many databases of facial-expression videos exist for emotion recognition, few databases of video clips are annotated according to the emotions they induce in viewers (Table 1)

  • Although efforts to standardize film-clip databases have recently increased, no multimedia database annotated along induced-emotion axes covers the full spectrum of emotions in movies, is large enough for machine learning, and is free of copyright restrictions


Summary

Introduction

Affective video content analysis aims at automatic recognition of the emotions elicited by videos. While major progress has been achieved in computer vision for visual object detection, scene understanding, and high-level concept recognition, a natural next step is the modeling and recognition of affective concepts. This has received increasing interest from research communities such as computer vision and machine learning, with the overall goal of endowing computers with human-like perception capabilities. Existing resources remain limited: one earlier database of this kind, for instance, was designed to illustrate key principles of affective computing rather than to be used in machine learning, and it is made up of only 50 clips of naturalistic and induced data, ranging from 5 seconds to 3 minutes in length.
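To make the feature-based baseline idea concrete, here is a minimal, hypothetical sketch of emotion prediction: a regressor mapping per-clip audio-visual feature vectors to a continuous affective score. The synthetic data, model choice (an RBF support vector regressor), and evaluation metric are assumptions for illustration and do not reproduce the paper's actual baseline or features.

```python
# Hypothetical sketch of a feature-based affective baseline: regressing a
# continuous valence score from per-clip audio-visual features. The data is
# synthetic and the model choice is an assumption, not the paper's pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))        # stand-in for per-clip feature vectors
y = rng.uniform(-1.0, 1.0, size=200)  # stand-in for valence annotations

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = SVR(kernel="rbf").fit(X_train, y_train)
print("test MSE:", mean_squared_error(y_test, model.predict(X_test)))
```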
