Abstract

Background and Objective
Although many models have been proposed for decoding visual perception and content understanding from electroencephalograms (EEGs), the inherent temporal relationships in EEG signals remain under-explored, and EEG-based visual object classification still needs improvement in both accuracy and computational complexity.

Methods
To exploit the uneven saturation of visual features across time segments, an end-to-end attention-based Bi-LSTM method, named Bi-LSTM-AttGW, is proposed. Two attention strategies are introduced into the Bi-LSTM framework. First, an attention gate replaces the forget gate of the traditional LSTM; it depends only on the historical cell state, not on the current input, so it greatly reduces the number of trainable parameters. Second, an attention weighting method is applied to the Bi-LSTM output to extract the most decisive information.

Results
The best classification accuracy achieved by the Bi-LSTM-AttGW model is 99.50%. Compared with state-of-the-art algorithms and baseline models, the proposed method offers clear advantages in both classification performance and computational complexity. Considering the contribution of individual brain regions to visual cognition, we also verify the method on EEG signals collected from the frontal and occipital regions, which are highly correlated with visual perception tasks.

Conclusions
The results support the idea that human brain activity related to visual recognition can be decoded more effectively by neural networks that incorporate neural mechanisms. They also provide support for the modularity theory of brain cognitive function and further demonstrate the superiority of the proposed attention-based Bi-LSTM model.
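The two attention strategies described in the abstract might be sketched roughly as below. This is a minimal illustrative NumPy sketch, not the paper's exact formulation: the gate form, the attention score vector, and all dimensions are assumptions. The attention gate `a` replaces the forget gate and is computed from the previous cell state alone, so it needs no input-to-gate weight matrix, which is the claimed source of the parameter savings; the attention weighting is sketched as a softmax pooling over the per-step outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def att_gate_lstm_step(x, h_prev, c_prev, p):
    """One LSTM step with a hypothetical attention gate.

    The gate `a` depends only on the previous cell state c_prev,
    not on the current input x, replacing the standard forget gate
    (which would need both input-to-gate and hidden-to-gate weights).
    """
    a = sigmoid(p["Wa"] @ c_prev + p["ba"])                # attention gate
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h_prev + p["bi"])  # input gate
    g = np.tanh(p["Wg"] @ x + p["Ug"] @ h_prev + p["bg"])  # candidate state
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h_prev + p["bo"])  # output gate
    c = a * c_prev + i * g                                 # cell update
    h = o * np.tanh(c)
    return h, c

def attention_pool(H, score_vec):
    """Soft attention over per-step outputs H of shape (T, hidden):
    scores -> softmax weights -> weighted sum, emphasising the most
    informative time segments before classification."""
    scores = H @ score_vec
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ H  # (hidden,) context vector

# Illustrative dimensions: input D, hidden Hd, T time steps.
D, Hd, T = 4, 3, 5
p = {k: rng.standard_normal((Hd, D)) for k in ("Wi", "Wg", "Wo")}
p |= {k: rng.standard_normal((Hd, Hd)) for k in ("Ui", "Ug", "Uo", "Wa")}
p |= {k: np.zeros(Hd) for k in ("bi", "bg", "bo", "ba")}

h = c = np.zeros(Hd)
outs = []
for _ in range(T):
    h, c = att_gate_lstm_step(rng.standard_normal(D), h, c, p)
    outs.append(h)
ctx = attention_pool(np.stack(outs), rng.standard_normal(Hd))
print(ctx.shape)  # (3,)
```

Note the parameter count: a standard forget gate costs Hd·D + Hd·Hd + Hd weights, while this gate costs only Hd·Hd + Hd, dropping the input-dependent term entirely, consistent with the reduction the abstract claims.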
