Abstract
A multi-view self-attention module is proposed and paired with a multi-scale convolutional model to build a multi-view self-attention convolutional network for multi-channel EEG emotion recognition. First, time-domain and frequency-domain features are extracted from the multi-channel EEG signals, and a three-dimensional feature matrix is constructed using spatial mapping relationships. Then, a multi-scale convolutional network extracts high-level abstract features from the feature matrix, and a multi-view self-attention network strengthens these features. Finally, a multilayer perceptron performs emotion classification. Experimental results on the DEAP public emotion dataset show that the multi-view self-attention convolutional network effectively integrates the time-domain, frequency-domain, and spatial-domain information of EEG signals. The multi-view self-attention module removes redundant information, applies attention weights to the network to accelerate convergence, and improves the model's recognition accuracy.
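The following is a minimal sketch of the pipeline described above, written in PyTorch. The layer sizes, the number of convolutional branches, the 9x9 electrode grid used for the spatial mapping, and the way the attention module is composed are all assumptions for illustration; the paper's actual architecture may differ.

```python
# Hypothetical sketch of the multi-scale convolution + self-attention pipeline.
# All hyperparameters (branch kernel sizes, channel counts, 9x9 grid) are assumed.
import torch
import torch.nn as nn

class MultiViewSelfAttentionCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=2, embed_dim=128, n_heads=4):
        super().__init__()
        # Multi-scale convolution: parallel branches with different kernel sizes
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, 32, k, padding=k // 2), nn.ReLU())
            for k in (1, 3, 5)
        ])
        self.proj = nn.Linear(32 * 3, embed_dim)
        # Self-attention over the flattened spatial positions to re-weight features
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        # Multilayer perceptron for the final emotion classification
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):  # x: (batch, frequency_bands, grid_height, grid_width)
        feats = torch.cat([b(x) for b in self.branches], dim=1)   # multi-scale features
        tokens = self.proj(feats.flatten(2).transpose(1, 2))       # (batch, h*w, embed_dim)
        attended, _ = self.attn(tokens, tokens, tokens)            # attention-weighted features
        return self.classifier(attended.mean(dim=1))               # class logits

# Example: a batch of 3-D feature matrices built from 4 frequency bands
# mapped onto an assumed 9x9 electrode grid (common for DEAP, but not stated in the abstract).
logits = MultiViewSelfAttentionCNN()(torch.randn(8, 4, 9, 9))
```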