Abstract

Deep learning models, especially convolutional neural networks (CNNs), have achieved remarkable results in clinical electroencephalography (EEG) pathology detection, owing to their capacity for automatic feature learning. However, most existing CNN-based models ignore both the spatial correlations among EEG channels and the latent complementary information carried by different convolutional layers, which are important cues for pathology detection. To this end, we propose SCNet, a Spatial Feature Fused Convolutional Network, for multi-channel EEG pathology detection. SCNet first applies a spatial information learning mechanism that captures channel-wise spatial correlations via global pooling strategies. A multi-level feature fusion module, which combines feature maps learned by different convolutional layers, is then devised to fully exploit the complementarity of multi-level features. The efficacy of the proposed SCNet is evaluated experimentally on two datasets, and the results outperform representative methods in the literature. Ablation studies on the spatial information learning and multi-level fusion modules further confirm their potential in EEG-based diagnostic applications.
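The abstract's two ideas, channel-wise attention from global pooling and fusion of feature maps from different layers, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation: the exact gating function, pooling combination, and fusion rule used by SCNet are not specified in the abstract, so sigmoid gating over average- plus max-pooled descriptors and concatenation of per-layer pooled features are chosen here only as simple, common instances of each idea.

```python
import numpy as np

def global_pool_attention(x):
    """Channel-wise reweighting from global pooling (illustrative sketch).

    x: array of shape (channels, time), a multi-channel EEG feature map.
    The sigmoid gate over avg+max pooled descriptors is a hypothetical
    choice, not necessarily SCNet's exact mechanism.
    """
    avg = x.mean(axis=1)                     # global average pooling per channel
    mx = x.max(axis=1)                       # global max pooling per channel
    score = avg + mx                         # combine pooled descriptors
    weights = 1.0 / (1.0 + np.exp(-score))   # sigmoid gate in (0, 1)
    return x * weights[:, None]              # reweight each channel's features

def multi_level_fusion(feature_maps):
    """Fuse feature maps from different layers (illustrative sketch).

    Each map is pooled to a fixed-length channel descriptor and the
    descriptors are concatenated, one simple way to combine multi-level
    features of different temporal lengths.
    """
    pooled = [fm.mean(axis=1) for fm in feature_maps]
    return np.concatenate(pooled)

# Toy example: 4-channel features from a shallow and a deep layer.
rng = np.random.default_rng(0)
shallow = rng.standard_normal((4, 128))   # longer temporal resolution
deep = rng.standard_normal((4, 32))       # shorter, more abstract features
fused = multi_level_fusion([global_pool_attention(shallow),
                            global_pool_attention(deep)])
print(fused.shape)  # (8,)
```

Note that attention leaves each feature map's shape unchanged, while fusion reduces both maps to fixed-length descriptors so layers with different temporal lengths can be combined.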
