Abstract

Data reusability is an important feature of current research in virtually every field of science. Modern research in Affective Computing often relies on datasets containing experiment-originated data such as biosignals, video clips, or images. Moreover, conducting experiments with a large number of participants to build datasets for Affective Computing research is time-consuming and expensive. Therefore, it is extremely important to provide solutions that allow data from a variety of sources to be (re)used, which usually demands data integration. This paper presents the Graph Representation Integrating Signals for Emotion Recognition and Analysis (GRISERA) framework, which provides a persistent model for storing integrated signals and methods for its creation. To the best of our knowledge, this is the first approach in the Affective Computing field that addresses the problem of integrating data from multiple experiments, storing it in a consistent way, and providing query patterns for data retrieval. The proposed framework is based on a standardized graph model, which is known to be highly suitable for signal processing purposes. The validation proved that data from the well-known AMIGOS dataset can be stored in the GRISERA framework and later retrieved for training deep learning models. Furthermore, the second case study proved that it is possible to integrate signals from multiple sources (AMIGOS, ASCERTAIN, and DEAP) into GRISERA and retrieve them for further statistical analysis.

Highlights

  • With the increase of research in the Affective Computing field [1], the number of published datasets related to emotion processing from experiments is growing [2]

  • The data are dispersed and stored in many formats; different datasets provide diverse biosignals and emotional states, which are sampled at various frequencies or even irregularly spaced in time

  • The framework proposed in this paper addresses the nagging problem of data integration and reusability in Affective Computing

Introduction

With the increase of research in the Affective Computing field [1], the number of published datasets related to emotion processing from experiments is growing [2]. They include biosignals such as EEG, ECG, and GSR, or facial expressions, along with contextual information and sometimes emotional states. The data are dispersed and stored in many formats; different datasets provide diverse biosignals and emotional states, which are sampled at various frequencies or even irregularly spaced in time. Such circumstances, combined with a lack of unified storage for experiment-originated data and of semi-automatic methods for integrating them, make it difficult to use data from various sources to conduct new research in the field of Affective Computing.
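To make the integration problem concrete, the sketch below shows one way heterogeneous experiment data could be mapped onto a common property-graph schema and queried uniformly. This is a hypothetical, minimal in-memory illustration, not the actual GRISERA model: the node labels (`Experiment`, `Participant`, `Signal`), relationship names, and properties are assumptions introduced here for the example.

```python
# Hypothetical sketch of a property graph integrating signals from two
# datasets under one schema (labels and relations are illustrative only).

class Graph:
    def __init__(self):
        self.nodes = {}   # node_id -> {"labels": set of labels, "props": dict}
        self.edges = []   # list of (source_id, relation, target_id) triples

    def add_node(self, node_id, labels, **props):
        self.nodes[node_id] = {"labels": set(labels), "props": props}

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbours(self, node_id, relation):
        # All targets reachable from node_id via the given relation.
        return [dst for src, rel, dst in self.edges
                if src == node_id and rel == relation]

g = Graph()
# Experiments from two different source datasets, mapped to one schema.
g.add_node("exp_amigos", ["Experiment"], dataset="AMIGOS")
g.add_node("exp_deap", ["Experiment"], dataset="DEAP")
g.add_node("p1", ["Participant"], age=24)
g.add_node("sig_ecg_1", ["Signal"], modality="ECG", sampling_hz=256)
g.add_node("sig_eeg_1", ["Signal"], modality="EEG", sampling_hz=128)
g.add_edge("exp_amigos", "HAS_PARTICIPANT", "p1")
g.add_edge("p1", "PRODUCED", "sig_ecg_1")
g.add_edge("p1", "PRODUCED", "sig_eeg_1")

# A query pattern: all ECG signals produced by participants of any
# experiment, regardless of which source dataset they came from.
ecg = [s for n in g.nodes
       if "Experiment" in g.nodes[n]["labels"]
       for p in g.neighbours(n, "HAS_PARTICIPANT")
       for s in g.neighbours(p, "PRODUCED")
       if g.nodes[s]["props"]["modality"] == "ECG"]
print(ecg)  # ['sig_ecg_1']
```

Once all source datasets are expressed in a single graph schema like this, the same traversal retrieves comparable signals across AMIGOS, ASCERTAIN, and DEAP alike, which is exactly the kind of reuse a unified storage aims to enable.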
