Abstract

Feature embeddings produced by deep neural networks are critical for accurate classification in seizure prediction tasks. However, the embeddings of individual electroencephalogram (EEG) samples learned by modern decoding algorithms may be ambiguous and noisy, because weak EEG signals are susceptible to interference from signals unrelated to EEG activity. To address this issue, we consider data uncertainty learning (DUL), which models the representation of each EEG sample as a Gaussian or Laplacian distribution to mitigate potential noise interference and enhance model robustness. Moreover, data uncertainty learning for transformer architectures has seldom been explored, owing to the limitations of multi-head self-attention in processing local features and the vanishing of gradients in upper layers. In this study, we introduce a novel hybrid visual transformer (HViT) architecture, which enhances the transformer's ability to process local features through a convolutional neural network (CNN). Concretely, we learn the mean of the distribution with the HViT, and an additional branch captures the variance of the Gaussian distribution or the scale of the Laplacian distribution during training. We also propose a learnable scheme for the constraint coefficients in the loss functions of different patients, yielding better optimization across patients. In addition, we introduce a simple uncertainty quantification method for each alarm of the k-of-n continuous prediction strategy by exploiting the continuity of EEG signals. Empirical evaluations on two publicly available epilepsy datasets demonstrate the superiority of our DUL method and the effectiveness of the proposed HViT architecture.
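The Gaussian variant of data uncertainty learning described above can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the function name `dul_embed`, the linear mean and log-variance branches, and the dimensions are all assumptions standing in for the HViT backbone and its auxiliary branch; the reparameterization-style sampling during training is the standard technique for this kind of distributional embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def dul_embed(features, w_mu, w_logvar, training=True):
    """Illustrative Gaussian data-uncertainty embedding (assumed helper,
    not the paper's code): a mean branch and a log-variance branch; during
    training the embedding is sampled via the reparameterization trick."""
    mu = features @ w_mu            # mean of the Gaussian embedding
    log_var = features @ w_logvar   # log-variance from the auxiliary branch
    if not training:
        return mu                   # at inference, use the mean only
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps  # z = mu + sigma * eps

# toy usage: 4 samples, 8-dim features, 3-dim embeddings (arbitrary sizes)
x = rng.standard_normal((4, 8))
w_mu = rng.standard_normal((8, 3))
w_lv = rng.standard_normal((8, 3)) * 0.1
z = dul_embed(x, w_mu, w_lv)
```

Sampling only during training is the usual design choice here: the noise injected through the variance branch regularizes the representation, while inference falls back to the deterministic mean.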
