Abstract
Manual inspection of electroencephalography (EEG) signals to detect epileptic seizures is time-consuming and prone to inter-rater variability. Moreover, EEG signals are contaminated by various noise sources, e.g., patient movement during seizures, making accurate identification of seizure activity challenging. In a multi-view seizure detection system, because seizures do not affect the brain uniformly, some views likely play a more significant role in detecting seizures and should therefore receive a higher weight in the concatenation step. To address this dynamic weight assignment issue and to create a more interpretable model, we propose a fusion attentive deep multi-view network (fAttNet). The fAttNet combines temporal multi-channel EEG signals, wavelet packet decomposition (WPD), and hand-engineered features as three key views. We also propose an artifact rejection approach to remove unwanted signals that do not originate from the brain. Experimental results on the Temple University Hospital (TUH) seizure database demonstrate that the proposed method outperforms state-of-the-art methods, raising accuracy from 0.82 to 0.86 and F1-score from 0.78 to 0.81. More importantly, the proposed method is interpretable for medical professionals, assisting clinicians in identifying the regions of the brain involved in the seizures.
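The abstract's central idea is attention-based weighting of views before concatenation. As a rough illustration only (the paper's actual architecture is not reproduced here), the sketch below assumes each of the three views has already been encoded into a fixed-size embedding and shows one common way to learn per-view weights with a softmax attention layer; all module names, dimensions, and the two-class output are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of attention-weighted
# view fusion. Assumes each view -- temporal EEG, WPD coefficients, and
# hand-engineered features -- is already embedded to the same dimension.
import torch
import torch.nn as nn


class AttentiveViewFusion(nn.Module):
    """Learns one scalar weight per view, then concatenates the weighted embeddings."""

    def __init__(self, embed_dim: int, num_views: int = 3, num_classes: int = 2):
        super().__init__()
        # Scores each view embedding; a softmax over views yields interpretable weights.
        self.score = nn.Sequential(
            nn.Linear(embed_dim, embed_dim // 2),
            nn.Tanh(),
            nn.Linear(embed_dim // 2, 1),
        )
        self.classifier = nn.Linear(embed_dim * num_views, num_classes)

    def forward(self, view_embeddings: torch.Tensor):
        # view_embeddings: (batch, num_views, embed_dim)
        scores = self.score(view_embeddings)       # (batch, num_views, 1)
        weights = torch.softmax(scores, dim=1)     # attention over the views
        weighted = weights * view_embeddings       # re-weight each view's embedding
        fused = weighted.flatten(start_dim=1)      # weighted concatenation
        return self.classifier(fused), weights.squeeze(-1)


if __name__ == "__main__":
    batch, num_views, embed_dim = 4, 3, 64
    fusion = AttentiveViewFusion(embed_dim, num_views)
    logits, view_weights = fusion(torch.randn(batch, num_views, embed_dim))
    # view_weights expose how much each view contributed, which is the kind of
    # interpretability the abstract refers to.
    print(logits.shape, view_weights)
```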