Abstract

There is increasing interest in applying artificial intelligence techniques to forecast epileptic seizures. In particular, machine learning algorithms can extract nonlinear statistical regularities from electroencephalographic (EEG) time series that anticipate abnormal brain activity. The recent literature reports promising results in seizure detection and prediction tasks using machine and deep learning methods. However, performance evaluation is often based on questionable randomized cross-validation schemes, which can place highly correlated signals (e.g., EEG data recorded from the same patient during nearby periods of the day) in both the training and test sets. The present study demonstrates that the use of more stringent evaluation strategies, such as those based on leave-one-patient-out partitioning, leads to a drop in accuracy from about 80% to 50% for a standard eXtreme Gradient Boosting (XGBoost) classifier on two different data sets. Our findings suggest that the definition of rigorous evaluation protocols is crucial to ensure the generalizability of predictive models before proceeding to clinical trials.
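The leakage mechanism described in the abstract can be illustrated with a small sketch. The snippet below is not the authors' code: it uses synthetic patient-correlated features, scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost, and compares shuffled k-fold cross-validation with a leave-one-patient-out split via `LeaveOneGroupOut`. Because the labels here are tied to patient identity rather than to any transferable signal, the shuffled scheme looks accurate while the patient-wise scheme falls to chance level, mirroring the paper's reported drop.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic "EEG features": each patient contributes a patient-specific
# offset, so samples from the same patient are strongly correlated.
n_patients, n_per_patient, n_features = 10, 40, 8
offsets = rng.normal(0.0, 2.0, size=(n_patients, n_features))
X = np.vstack([offsets[p] + rng.normal(size=(n_per_patient, n_features))
               for p in range(n_patients)])

# Labels assigned per patient (no transferable signal across patients).
y = np.repeat(rng.integers(0, 2, size=n_patients), n_per_patient)
groups = np.repeat(np.arange(n_patients), n_per_patient)

clf = GradientBoostingClassifier(random_state=0)

# Randomized k-fold: samples from one patient leak into both train and test.
kfold_acc = cross_val_score(
    clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()

# Leave-one-patient-out: every sample of the held-out patient stays in test.
lopo_acc = cross_val_score(
    clf, X, y, groups=groups, cv=LeaveOneGroupOut()).mean()

print(f"shuffled k-fold accuracy:       {kfold_acc:.2f}")
print(f"leave-one-patient-out accuracy: {lopo_acc:.2f}")
```

The shuffled k-fold estimate is inflated because the model can memorize patient-specific offsets; the leave-one-patient-out estimate reflects generalization to an unseen patient, which is the clinically relevant setting.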
