Abstract

With the rapid development of deep learning (DL) and the convolutional neural network (CNN), electroencephalogram (EEG)-based emotion recognition plays an increasingly important role in the field of brain-computer interfaces (BCI). However, the architectures currently in use have mostly been designed manually by human experts, which is a time-consuming and labor-intensive process. In this paper, we propose a novel neural architecture search (NAS) framework based on reinforcement learning (RL) for EEG-based emotion recognition, which can automatically design network architectures. The proposed NAS framework consists of three main parts: a search space, a search strategy, and an evaluation strategy. During the search process, a recurrent neural network (RNN) controller selects the optimal network structure from the search space. We train the controller with RL to maximize the expected reward of the generated models on a validation set and enforce parameter sharing among those models. We evaluated the performance of NAS on the DEAP and DREAMER datasets. On the DEAP dataset, the average accuracies reached 97.94%, 97.74%, and 97.82% on arousal, valence, and dominance, respectively. On the DREAMER dataset, the average accuracies reached 96.62%, 96.29%, and 96.61% on arousal, valence, and dominance, respectively. The experimental results demonstrate that the proposed NAS outperforms state-of-the-art CNN-based methods.
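To make the controller-update step concrete, the following is a minimal PyTorch sketch of an RL-driven search loop of the kind the abstract describes: an RNN controller samples candidate architectures, each candidate is scored on a validation set, and the controller is updated with REINFORCE to maximize the expected reward. The names (`ControllerRNN`, `evaluate_candidate`) and dimensions (`NUM_LAYERS`, `NUM_OPS`) are illustrative assumptions, not the paper's implementation, and the reward here is a random stand-in for validation accuracy of a weight-sharing child model.

```python
# Sketch of an ENAS-style controller trained with REINFORCE (illustrative only).
import torch
import torch.nn as nn

NUM_LAYERS = 4   # architecture decisions per candidate (assumed)
NUM_OPS = 6      # size of the per-layer operation search space (assumed)

class ControllerRNN(nn.Module):
    """Samples one operation index per layer with an LSTM policy."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.embed = nn.Embedding(NUM_OPS + 1, hidden)  # +1 for the start token
        self.head = nn.Linear(hidden, NUM_OPS)
        self.hidden = hidden

    def sample(self):
        h = torch.zeros(1, self.hidden)
        c = torch.zeros(1, self.hidden)
        token = torch.tensor([NUM_OPS])          # start token
        log_probs, actions = [], []
        for _ in range(NUM_LAYERS):
            h, c = self.lstm(self.embed(token), (h, c))
            dist = torch.distributions.Categorical(logits=self.head(h))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            actions.append(action.item())
            token = action                       # feed choice back in
        return actions, torch.stack(log_probs).sum()

def evaluate_candidate(arch):
    """Placeholder: build the child network for `arch` with shared weights,
    evaluate it on the validation split, and return accuracy in [0, 1]."""
    return torch.rand(1).item()  # random stand-in reward for illustration

controller = ControllerRNN()
opt = torch.optim.Adam(controller.parameters(), lr=3e-4)
baseline = 0.0  # moving-average baseline to reduce REINFORCE variance

for step in range(100):
    arch, log_prob = controller.sample()
    reward = evaluate_candidate(arch)
    baseline = 0.95 * baseline + 0.05 * reward
    loss = -(reward - baseline) * log_prob       # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, parameter sharing would be realized inside `evaluate_candidate` by reusing one set of child-model weights across all sampled architectures, so each candidate can be scored without training from scratch.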
