Abstract

Deep stacked networks (DSNs) have shown promising performance in electroencephalogram (EEG) pattern decoding by recursively enhancing the separability of input features with supervised information at each stack. However, most DSN-based models take pre-extracted EEG features as input, which hampers the learning of high-level EEG representations when the input features fail to fully capture the informative neural patterns. To overcome this issue, we propose a novel deep stacked architecture, Deep Stacked Feature Representation (DSFR), that feeds the network with raw EEG data for automatic learning of high-level representations and abstractions. The architecture uses a series of feature decoding modules (FDMs) as its base building blocks, with random projections as the stacking elements. Each FDM chains a common spatial pattern (CSP) feature extractor with a support matrix machine (SMM) matrix classifier. Random projections of the SMM predictions from all previous FDMs are integrated into the raw EEG data, which is then fed to the CSP of the subsequent FDM to generate the EEG feature representation recursively. DSFR runs in an efficient feed-forward manner and needs no backpropagation-based parameter fine-tuning, which simplifies optimization. Extensive experiments on three publicly available motor imagery (MI) EEG datasets show that DSFR outperforms state-of-the-art methods.
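
To make the feed-forward stacking concrete, below is a minimal Python sketch of one plausible reading of the pipeline. It is an illustration under stated assumptions, not the authors' implementation: the CSP step is a standard two-class CSP via whitening and eigendecomposition; the SMM is replaced by a simple least-squares linear classifier as a stand-in; and the random projections of earlier predictions are appended to the raw EEG as a single extra pseudo-channel, which is only one possible integration scheme, since the abstract does not specify the exact mechanism. All names (csp_filters, dsfr_forward, etc.) are hypothetical.

    import numpy as np

    def csp_filters(X, y, n_filters=4):
        # Simplified two-class CSP: whiten the composite covariance, then
        # keep the eigenvectors with the most extreme eigenvalues.
        # X: (trials, channels, samples), y: binary labels in {0, 1}.
        covs = []
        for c in (0, 1):
            trials = X[y == c]
            C = np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
            covs.append(C)
        evals, evecs = np.linalg.eigh(covs[0] + covs[1])
        evals = np.clip(evals, 1e-10, None)
        P = evecs @ np.diag(evals ** -0.5) @ evecs.T       # whitening transform
        d, B = np.linalg.eigh(P @ covs[0] @ P.T)
        order = np.argsort(d)
        pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
        return (B.T @ P)[pick]                             # (n_filters, channels)

    def csp_features(X, W):
        # Log of normalized variance of the spatially filtered trials.
        Z = np.einsum('fc,tcs->tfs', W, X)
        v = Z.var(axis=2)
        return np.log(v / v.sum(axis=1, keepdims=True))

    def fit_linear(F, y):
        # Least-squares linear classifier: a stand-in for the paper's SMM.
        A = np.hstack([F, np.ones((len(F), 1))])
        beta, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
        return beta

    def scores(F, beta):
        return np.hstack([F, np.ones((len(F), 1))]) @ beta

    def dsfr_forward(X, y, n_modules=3, seed=0):
        # Feed-forward stacking: each FDM sees the raw EEG augmented with a
        # random projection of all previous modules' predictions, realized
        # here as one extra pseudo-channel (an assumed integration scheme).
        rng = np.random.default_rng(seed)
        n_samp = X.shape[2]
        aug, history = X, []
        for _ in range(n_modules):
            W = csp_filters(aug, y)
            F = csp_features(aug, W)
            beta = fit_linear(F, y)
            s = scores(F, beta)
            history.append(s)
            S = np.stack(history, axis=1)                  # (trials, modules so far)
            R = rng.standard_normal((S.shape[1], n_samp))  # random projection
            aug = np.concatenate([X, (S @ R)[:, None, :]], axis=1)
        return np.sign(s)

    # Toy run on synthetic two-class data (80 trials, 22 channels, 250 samples):
    rng = np.random.default_rng(1)
    X = rng.standard_normal((80, 22, 250))
    y = rng.integers(0, 2, size=80)
    print((dsfr_forward(X, y) == 2 * y - 1).mean())        # training accuracy

At test time, each module's CSP filters, classifier weights, and random projection matrices would be frozen and applied to unseen trials in the same feed-forward order; no backpropagation is involved at any stage, consistent with the optimization claim above.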
