Abstract

Emotions have a profound impact on human behavior, and this is especially true for drivers, since negative emotions increase the risk of traffic accidents. It is therefore important to accurately recognize drivers' emotional states so that negative emotions can be addressed before they compromise driving behavior. In contrast to many current studies that rely on complex, deep neural network models to achieve high accuracy, this research explores whether high recognition accuracy can be achieved with shallow neural networks by restructuring the layout and dimensions of the data. We propose an end-to-end convolutional neural network (CNN) model, the simply ameliorated CNN (SACNN), to address the low accuracy of cross-subject emotion recognition. We extracted features from EEG signals in the SEED dataset from the BCMI Laboratory and reshaped them into 62-dimensional data, and we obtained the optimal model configuration through ablation experiments. To further improve recognition accuracy, we trained on the data of each of the 62 EEG channels separately and selected the 10 channels with the highest accuracy. The SACNN model achieved 88.16% accuracy on the raw cross-subject data and 91.85% accuracy on the data from the top 10 channels. We also examined how the positions of the batch normalization (BN) and dropout layers affect the model, and found that a targeted shallow CNN outperformed deeper CNN models with larger receptive fields. Finally, we discuss the open issues and challenges of driver emotion recognition in emerging smart city applications.
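To make the shallow-network idea concrete, the sketch below shows a minimal single-block CNN for cross-subject EEG emotion classification. It is illustrative only: the 62 x 5 input layout (62 channels by 5 frequency-band features, a common arrangement for SEED-style differential-entropy features), the kernel size, the filter count, and the BN/dropout placement are assumptions for the example, not the paper's exact SACNN configuration. The three output classes correspond to SEED's positive, neutral, and negative labels.

```python
import torch
import torch.nn as nn

class ShallowEEGCNN(nn.Module):
    """Minimal shallow CNN for EEG emotion classification (illustrative sketch)."""

    def __init__(self, n_channels=62, n_bands=5, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            # One shallow convolutional block over the (channels x bands) feature map
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),   # BN right after the convolution (one candidate placement)
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(0.5),      # dropout after pooling (assumed position for this sketch)
        )
        self.classifier = nn.Linear(16 * (n_channels // 2) * (n_bands // 2), n_classes)

    def forward(self, x):         # x: (batch, 1, 62, 5)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: score a random batch of 8 feature maps
model = ShallowEEGCNN()
logits = model(torch.randn(8, 1, 62, 5))
print(logits.shape)  # torch.Size([8, 3]) -> positive / neutral / negative
```

A channel-selection step like the one described in the abstract could reuse this same model: train one instance per channel (input shape 1 x 1 x bands), rank channels by validation accuracy, and retrain on the 10 best-performing channels.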
