Abstract

Automated emotion recognition systems based on physiological signals are essential for affective computing and intelligent interaction. Combining multiple physiological signals assesses a person's emotional state more precisely and effectively than any single signal. Yet systems built on conventional machine learning techniques require complete access to the physiological data for emotion classification, compromising the privacy of this sensitive data. Federated Learning (FL) resolves this issue by preserving the privacy of the user's sensitive physiological data while recognizing emotions. However, existing FL methods handle heterogeneity in physiological data poorly and do not evaluate communication efficiency or scalability. In response to these challenges, this paper proposes a novel framework called AFLEMP (Attention-based Federated Learning for Emotion recognition using Multi-modal Physiological data), integrating an attention-based Transformer with an Artificial Neural Network (ANN) model. The framework reduces two types of data heterogeneity: (1) Variation Heterogeneity (VH) in multi-modal EEG, GSR, and ECG signals, using attention mechanisms, and (2) Imbalanced Data Heterogeneity (IDH) in the FL environment, using scaled weighted federated averaging. This paper validates the proposed AFLEMP framework on two publicly available emotion datasets, AMIGOS and DREAMER, achieving average accuracies of 88.30% and 84.10%, respectively. The proposed AFLEMP framework proves robust, scalable, and communication-efficient. AFLEMP is the first FL framework proposed for emotion recognition using multi-modal physiological signals that reduces data heterogeneity, and it outperforms existing FL methods.
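The abstract does not detail the scaled weighted federated averaging step. Below is a minimal sketch, assuming the standard sample-count weighting of FedAvg, in which each client's parameters are scaled by its share of the total training samples; the function name, signature, and data layout are illustrative assumptions, not AFLEMP's actual implementation, whose exact scaling for imbalanced data may differ.

import numpy as np

def scaled_weighted_fedavg(client_weights, client_sample_counts):
    """Aggregate per-client model parameters into global parameters.

    Each client's contribution is scaled by its share of the total
    training samples, so clients holding more data carry more weight.

    client_weights: list (one entry per client) of lists of
        np.ndarray (one array per model layer).
    client_sample_counts: list of ints, local sample count per client.

    NOTE: illustrative sketch in the spirit of weighted FedAvg;
    not the paper's exact formulation.
    """
    total = float(sum(client_sample_counts))
    scales = [n / total for n in client_sample_counts]
    num_layers = len(client_weights[0])
    # Weighted sum of each layer's parameters across all clients.
    return [
        sum(s * w[layer] for s, w in zip(scales, client_weights))
        for layer in range(num_layers)
    ]

# Example: two clients with unequal data sizes (IDH), one-layer model.
global_params = scaled_weighted_fedavg(
    [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]],
    [100, 300],  # the second client holds 3x the data
)

Under this weighting, a client with three times the data pulls the global parameters three times as strongly, which is the usual way weighted averaging counteracts imbalance across clients.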
