Objective.
Developing an efficient and generalizable method for inter-subject emotion recognition from neural signals is an emerging and challenging problem in affective computing. In particular, human subjects usually exhibit heterogeneous neural signal characteristics and variable emotional activity, which prevent existing recognition algorithms from achieving high inter-subject emotion recognition accuracy.
Approach. 
In this work, we propose a model-agnostic meta-learning algorithm to learn an adaptable and generalizable electroencephalogram (EEG)-based emotion decoder at the subject population level. Unlike many prior end-to-end emotion recognition algorithms, our learning algorithm comprises a pre-training step and an adaptation step. Specifically, the meta-decoder first learns from a diverse set of known subjects and is then adapted to unknown subjects with one-shot adaptation. More importantly, our algorithm is compatible with a variety of mainstream machine learning decoders for emotion recognition.
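The two-stage procedure described above (meta-pre-training across known subjects, then one-shot adaptation to a new subject) can be illustrated with a minimal first-order model-agnostic meta-learning (MAML) sketch. This is an illustration only: the linear decoder, synthetic data, and all hyperparameters below are placeholder assumptions, not the paper's actual decoder architectures or EEG features.

```python
# Minimal first-order MAML sketch: meta-pre-train across "subjects" (tasks),
# then adapt to a new subject with a single gradient step.
# All models, data, and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Mean-squared error of a linear decoder and its gradient w.r.t. w."""
    err = X @ w - y
    return np.mean(err ** 2), 2 * X.T @ err / len(y)

def maml_pretrain(tasks, inner_lr=0.05, outer_lr=0.01, steps=200):
    """Learn a meta-initialization over known subjects (first-order MAML)."""
    w = np.zeros(2)
    for _ in range(steps):
        meta_grad = np.zeros_like(w)
        for Xs, ys, Xq, yq in tasks:
            # Inner step: adapt to the subject's support data.
            _, g = loss_and_grad(w, Xs, ys)
            w_adapted = w - inner_lr * g
            # Outer gradient (first-order approximation) on query data.
            _, gq = loss_and_grad(w_adapted, Xq, yq)
            meta_grad += gq
        w -= outer_lr * meta_grad / len(tasks)
    return w

def one_shot_adapt(w, Xs, ys, inner_lr=0.05):
    """One-shot adaptation: a single gradient step on the new subject."""
    _, g = loss_and_grad(w, Xs, ys)
    return w - inner_lr * g

def make_task(w_true, n=20):
    """Synthetic 'subject': a linear decoding task with small noise."""
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X[:10], y[:10], X[10:], y[10:]  # support / query split

# Known subjects share structure but vary individually.
base = np.array([1.0, -2.0])
tasks = [make_task(base + 0.1 * rng.normal(size=2)) for _ in range(8)]
w_meta = maml_pretrain(tasks)

# Unknown subject: adapt the meta-decoder with one gradient step.
Xs, ys, Xq, yq = make_task(base + 0.1 * rng.normal(size=2))
w_new = one_shot_adapt(w_meta, Xs, ys)
```

After pre-training, the meta-initialization sits close to all subjects' decoders, so a single inner step on a new subject's small support set already yields a low query loss; a decoder trained from scratch would need far more per-subject data.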
Main results.
We evaluate the adapted decoders obtained by our proposed algorithm on three EEG emotion datasets: SEED, DEAP, and DREAMER. Our comprehensive experimental results show that the adapted meta-emotion decoder achieves state-of-the-art inter-subject emotion recognition accuracy and outperforms the classical supervised learning baseline across different decoder architectures.
Significance. 
Our results suggest that incorporating the proposed meta-learning emotion recognition algorithm into the design of future affective brain-computer interfaces (BCIs) can effectively improve their inter-subject generalizability.