For dimensional emotion recognition, electroencephalography (EEG) signals and electrooculogram (EOG) signals are often combined to improve classifier performance, since they provide complementary features to each other. In this article, we combine EEG signals from emotion-relevant channels with EOG signals to boost recognition accuracy. We first compute the mutual information (MI) of all EEG channels and retain only the emotion-related channels, i.e., those with higher MI, because interference from uncorrelated channels can degrade recognition performance and using all EEG channels incurs substantial computational cost. Because the optimal lengths of EEG and EOG segments for emotion recognition remain uncertain, we also systematically investigate the effect of the time-window size on recognition performance. This strategy not only increases the number of training samples but also reduces feature redundancy. In addition to multiple statistical features, we employ increment entropy to capture abrupt changes in the EEG signals. The experimental results show that the proposed channel-selection algorithm retained 13 of the 32 EEG channels, and these selected channels alone already yield accurate emotion predictions. We further found that splitting the EEG and EOG signals into several short slices with optimal time-windows and then combining them enhances recognition performance, with time-windows of 4, 5, 6, and 10 s allowing the combined signals to achieve very high accuracy.
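To illustrate the MI-based channel selection and time-window slicing described above, the following is a minimal Python sketch. The array shapes, the 128 Hz sampling rate, the simple per-slice statistical features, and the helper name `select_channels_by_mi` are assumptions made for illustration only; the authors' actual pipeline also uses increment entropy and the EOG signals, which are not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_channels_by_mi(eeg, labels, n_keep=13, fs=128, win_s=4):
    """Rank EEG channels by mutual information with emotion labels.

    eeg    : array of shape (n_trials, n_channels, n_samples)
    labels : array of shape (n_trials,) with discretized valence/arousal labels
    Returns the indices of the n_keep highest-MI channels.
    """
    n_trials, n_channels, n_samples = eeg.shape
    win = fs * win_s                       # samples per time-window slice
    n_slices = n_samples // win

    mi_per_channel = np.zeros(n_channels)
    for ch in range(n_channels):
        # Split each trial into non-overlapping slices and compute simple
        # statistical features (mean, std, mean power) per slice.
        sliced = eeg[:, ch, :n_slices * win].reshape(n_trials, n_slices, win)
        feats = np.stack([sliced.mean(-1), sliced.std(-1),
                          (sliced ** 2).mean(-1)], axis=-1)
        X = feats.reshape(n_trials * n_slices, -1)
        y = np.repeat(labels, n_slices)    # each slice inherits its trial label
        # Average MI between this channel's slice features and the labels.
        mi_per_channel[ch] = mutual_info_classif(X, y, random_state=0).mean()

    # Keep the channels carrying the most emotion-related information.
    return np.argsort(mi_per_channel)[::-1][:n_keep]
```

Slicing each trial into short windows before feature extraction mirrors the strategy in the abstract: it multiplies the number of training samples while keeping each feature vector compact.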