Abstract

Deep learning (DL) methods have been widely used for seizure prediction from electroencephalogram (EEG) recordings in recent years. However, DL methods usually involve numerous multiplication operations, resulting in high computational complexity. In addition, most current approaches in this field focus on designing models with special architectures to learn representations, ignoring the intrinsic patterns in the data. In this study, we propose a simple and effective end-to-end model combining an adder network with supervised contrastive learning (AddNet-SCL). The method uses addition instead of the massive multiplication in the convolution process to reduce the computational cost. Besides, contrastive learning is employed to use label information effectively: points of the same class are clustered together in the projection space, while points of different classes are pushed apart. Moreover, the proposed model is trained by combining the supervised contrastive loss from the projection layer and the cross-entropy loss from the classification layer. Since the adder network uses the l1-norm distance as the similarity measure between the input features and the filters, the gradient function of the network changes; an adaptive learning rate strategy is therefore employed to ensure the convergence of AddNet-SCL. Experimental results show that the proposed method achieves 94.9% sensitivity, an area under the curve (AUC) of 94.2%, and a false positive rate (FPR) of 0.077/h on 19 patients in the CHB-MIT database, and 89.1% sensitivity, an AUC of 83.1%, and an FPR of 0.120/h on the Kaggle database. These competitive results indicate that the method has broad prospects in clinical practice.
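
To make the adder-convolution idea concrete, here is a minimal PyTorch sketch (not the authors' code) of the operation the abstract describes: each output activation is the negative l1-norm distance between an input patch and a filter, so the feature computation needs only subtractions and absolute values rather than multiplications. The function name and signature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def adder_conv2d(x, weight, stride=1, padding=0):
    # x: (N, C_in, H, W); weight: (C_out, C_in, kH, kW).
    n, c_in, h, w = x.shape
    c_out, _, kh, kw = weight.shape
    # Unfold the input into sliding patches: (N, C_in*kH*kW, L),
    # where L is the number of spatial positions.
    patches = F.unfold(x, (kh, kw), stride=stride, padding=padding)
    w_flat = weight.view(c_out, -1)  # (C_out, C_in*kH*kW)
    # Output = negative l1 distance between every patch and every filter.
    out = -(patches.unsqueeze(1) - w_flat[None, :, :, None]).abs().sum(dim=2)
    h_out = (h + 2 * padding - kh) // stride + 1
    w_out = (w + 2 * padding - kw) // stride + 1
    return out.view(n, c_out, h_out, w_out)
```

The broadcasting here materializes an (N, C_out, C_in*kH*kW, L) tensor, so this version is memory-hungry and intended only to show the l1-based similarity; an efficient implementation would use a fused kernel.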
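The training objective combines a supervised contrastive term on the projection-layer outputs with cross-entropy on the classification-layer outputs. The sketch below assumes the standard supervised contrastive loss of Khosla et al. (2020) and a hypothetical weighting factor lam; the paper's exact loss combination may differ.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    # z: (N, D) projection-layer outputs; labels: (N,) class labels.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                              # (N, N) similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)             # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels[None, :] == labels[:, None]) & ~self_mask
    # Average log-probability of each anchor's same-class positives:
    # same-class points are pulled together, other classes pushed apart.
    return -((log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)).mean()

def total_loss(logits, z, labels, lam=1.0):
    # Cross-entropy from the classification head plus the supervised
    # contrastive term from the projection head; lam is a hypothetical weight.
    return F.cross_entropy(logits, labels) + lam * supcon_loss(z, labels)
```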
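Because the l1-based layers produce gradients whose magnitudes differ from those of ordinary convolutions, the abstract mentions an adaptive learning rate strategy. A minimal sketch, assuming the per-layer gradient scaling of the original AdderNet paper (Chen et al., 2020); eta and the grouping of adder-layer parameters are assumptions here, not details from this abstract.

```python
import math

def scale_adder_grads(adder_params, eta=0.1):
    # Rescale each adder layer's gradient to eta * sqrt(k) / ||grad||_2,
    # where k is the number of weights in that layer, so every adder layer
    # takes updates of comparable magnitude. Call this between backward()
    # and optimizer.step().
    for p in adder_params:
        if p.grad is not None:
            k = p.grad.numel()
            p.grad.mul_(eta * math.sqrt(k) / (p.grad.norm(2) + 1e-12))
```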
