Abstract

In this paper, we propose a novel neural network architecture called CTCNet. First, we adopt a multi-scale convolutional neural network (MSCNN) to extract low- and high-frequency features, an adaptive channel feature recalibration (ACFR) module to sharpen the model's sensitivity to informative channels in the feature maps while reducing its dependence on irrelevant or redundant ones, and a multi-scale dilated convolutional block (MSDCB) to capture diverse characteristics across feature channels. Second, we use a Transformer to extract global temporal context features. Third, we employ a capsule network to capture spatial location relationships among EEG features and to refine them; the capsule network module also serves as the model's classifier. Notably, our model addresses a shortcoming of previous research, which failed to extract local features and global temporal context of EEG signals simultaneously and ignored the spatial location relationships between these features. Finally, we evaluate our model on three datasets, where it achieves performance better than or comparable to most state-of-the-art methods.
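To make the channel-recalibration idea concrete, the following is a minimal, framework-free sketch of a squeeze-and-excitation-style gate. It is not the paper's ACFR module: the function name, the per-channel scalar `gate_weights`, and the simplified sigmoid gating are illustrative assumptions standing in for the learned excitation layers a real block would use.

```python
import math

def recalibrate_channels(feature_map, gate_weights):
    """Simplified channel recalibration sketch (not the paper's ACFR).

    feature_map: list of channels, each a list of floats.
    gate_weights: one hypothetical scalar per channel, standing in
    for the learned excitation layers of a trained module.
    """
    # Squeeze: global average pooling produces one descriptor per channel.
    descriptors = [sum(ch) / len(ch) for ch in feature_map]
    # Excitation: a sigmoid gate maps each descriptor to a scale in (0, 1).
    scales = [1.0 / (1.0 + math.exp(-w * d))
              for w, d in zip(gate_weights, descriptors)]
    # Recalibrate: emphasize informative channels, suppress the rest.
    return [[x * s for x in ch] for ch, s in zip(feature_map, scales)]

# A channel with a strong response keeps most of its magnitude,
# while a weak channel is attenuated further.
fmap = [[1.0, 2.0, 3.0], [0.1, 0.2, 0.3]]
out = recalibrate_channels(fmap, gate_weights=[2.0, 2.0])
```

In a trained network the gate would be produced by small fully connected layers rather than fixed scalars; the point here is only the squeeze-gate-rescale pattern that lets the model reweight feature channels by importance.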
