Cross-subject electroencephalogram (EEG) emotion recognition refers to the process of using EEG signals to recognize and classify emotions across different individuals. EEG tracks neural electrical patterns, and by analyzing these signals it is possible to infer a person's emotional state. The objective of cross-subject recognition is to build models or algorithms that detect emotions reliably not only within the same person but also across other people. Accurately predicting emotions is challenging because EEG traits are dynamic and vary between subjects; existing models struggle with feature extraction, convergence, and negative transfer, which hinders cross-subject emotion recognition. The proposed model employs thorough signal preprocessing and a Short-Time Geodesic Flow Kernel Fourier Transform (STGFKFT) for feature extraction, enhancing classifier accuracy. Multi-view sheaf attention improves feature discrimination, while the Multi-Scale Convolutional Conditional Invertible Puma Discriminator Neural Network (MSCCIPDNN) framework ensures generalization. Efficient computational techniques and the puma optimization algorithm enhance model robustness and convergence. The proposed framework achieves high accuracies of 99.5%, 99%, and 99.50% on the SEED, SEED-IV, and DEAP datasets, respectively. By incorporating these techniques, the proposed method aims to recognize emotions precisely and capture features accurately, thereby overcoming the limitations of existing methodologies.