Electroencephalography (EEG)-based brain-computer interfaces (BCIs) have a wide range of applications in affect recognition. Including irrelevant channels when decoding brain activity from different regions can degrade performance on the task at hand, so further research is needed to identify the channels that yield the best results. In this study, deep learning models and 2-D image representations are used to assess the efficacy of individual EEG channels for classifying subjective valence responses to energy data visualizations. The EEG signals are converted into spectrograms and Gramian Angular Field (GAF) images. A hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model and an LSTM network are used to extract feature sets from the converted images. These feature sets, after dimensionality reduction via principal component analysis (PCA), are fed into a boosting classifier (AdaBoost). The performance metrics of both models and both 2-D representations are compared. The LSTM and CNN-LSTM models achieve state-of-the-art accuracies with both GAFs and spectrograms. The CNN-LSTM model achieves the highest performance when using spectrograms for the F8 channel, whereas the GAF method yields comparatively lower accuracy for the F3 channel. Because the CNN-LSTM model with spectrograms produces reliable results, this combination is used for further analysis of the remaining channel pairs. This study suggests that a single EEG channel is more effective than multiple channels for recognizing emotions evoked by energy data visualizations, and the proposed methodology offers an efficient way to select channels for this purpose.
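The pipeline described above (signal-to-image encoding, feature extraction, PCA, AdaBoost) can be sketched on synthetic data. This is a minimal illustration, not the authors' implementation: the paper extracts features with CNN-LSTM/LSTM networks, while here the GAF image is simply flattened, and all signal parameters are invented for the demonstration.

```python
# Sketch of the abstract's pipeline on synthetic single-channel "EEG":
# 1-D signal -> Gramian Angular Summation Field image -> flatten ->
# PCA -> AdaBoost. Flattening stands in for the paper's deep feature
# extraction; every parameter below is an illustrative assumption.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

def gaf_image(signal):
    """Encode a 1-D signal as a Gramian Angular Summation Field."""
    x = signal.astype(float)
    # Rescale to [-1, 1] so arccos is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))      # polar angle per sample
    return np.cos(phi[:, None] + phi[None, :])  # GASF: cos(phi_i + phi_j)

rng = np.random.default_rng(0)
n_trials, n_samples = 40, 64
# Synthetic trials: the two "valence" classes differ in dominant frequency.
labels = rng.integers(0, 2, n_trials)
t = np.linspace(0, 1, n_samples)
signals = np.stack([
    np.sin(2 * np.pi * (5 + 5 * y) * t) + 0.3 * rng.standard_normal(n_samples)
    for y in labels
])

features = np.stack([gaf_image(s).ravel() for s in signals])
feats_reduced = PCA(n_components=10).fit_transform(features)
clf = AdaBoostClassifier(n_estimators=50).fit(feats_reduced, labels)
print(clf.score(feats_reduced, labels))  # training accuracy on toy data
```

The same skeleton applies to the spectrogram variant by replacing `gaf_image` with a time-frequency transform (e.g. `scipy.signal.spectrogram`) before flattening.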