Abstract

The electroencephalogram (EEG) is widely used for the diagnosis of neurological conditions such as epilepsy, neurodegenerative illnesses, and sleep-related disorders. Proper interpretation of EEG recordings requires the expertise of trained neurologists, a resource that is scarce in the developing world. Neurologists spend a significant portion of their time sifting through EEG recordings looking for abnormalities, yet most recordings turn out to be completely normal owing to the low yield of EEG tests. To minimize this wastage of time and effort, automatic algorithms could provide pre-diagnostic screening that separates normal from abnormal EEG. Data-driven machine learning offers a way forward; however, the design and verification of modern machine learning algorithms require properly curated, labeled datasets. To avoid bias, deep-learning-based methods must be trained on large datasets from diverse sources. This work presents a new open-source dataset, named the NMT Scalp EEG Dataset, consisting of 2,417 recordings from unique participants spanning almost 625 hours. Each recording is labeled as normal or abnormal by a team of qualified neurologists. Demographic information such as the gender and age of each patient is also included. Our dataset focuses on the South Asian population. Several existing state-of-the-art deep learning architectures developed for pre-diagnostic screening of EEG are implemented and evaluated on the NMT dataset and referenced against baseline performance on the well-known Temple University Hospital (TUH) EEG Abnormal Corpus. Generalization of these deep learning architectures across the NMT and the reference dataset is also investigated. The NMT dataset is being released to increase the diversity of EEG datasets and to address the scarcity of accurately annotated, publicly available datasets for EEG research.
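
The screening task described above amounts to binary classification of labeled EEG recordings. As a minimal sketch, assuming EDF files accompanied by a metadata table of recording identifiers, normal/abnormal labels, age, and gender (the file names and column names below are illustrative assumptions, not the released layout of the NMT dataset), one recording could be windowed for a classifier as follows:

import numpy as np
import pandas as pd
import mne

# Hypothetical metadata table: one row per recording with an identifier,
# a normal/abnormal label, and demographic fields (age, gender).
labels = pd.read_csv("labels.csv")
row = labels.iloc[0]

# Read one EDF recording and resample to a common rate (100 Hz here).
raw = mne.io.read_raw_edf(f"{row.recording_id}.edf", preload=True, verbose=False)
raw.resample(100.0)

# Cut the recording into fixed-length 60 s windows; every window inherits
# the recording-level normal/abnormal label.
data = raw.get_data()                      # shape: (n_channels, n_samples)
win = 100 * 60
n_win = data.shape[1] // win
x = data[:, : n_win * win].reshape(data.shape[0], n_win, win).transpose(1, 0, 2)
y = np.full(n_win, 1 if row.label == "abnormal" else 0)
print(x.shape, y.shape)                    # (n_win, n_channels, 6000), (n_win,)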

Highlights

  • Neurological disorders are among the major causes of disability and death worldwide and place a significant burden on the global health system

  • In phase-II, the model saved at the end of phase-I was reloaded and training resumed on the entire training set until the loss function dropped back to its value at the end of phase-I (a minimal sketch of this schedule follows these highlights)

  • The accuracy achieved by ChronoNet on the TUH dataset was reported as 86.75% in Roy et al (2019); the highest accuracy obtained by our implementation of this architecture was lower at 81%, leaving a noticeable gap between the results reported by Roy et al (2019) and what we observed
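
A minimal sketch of the phase-II schedule mentioned above, under assumed details (the model, optimizer, data loader, and checkpoint contents below are placeholders rather than the authors' released training code), could look as follows:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network, optimizer, and training data standing in for the
# architectures and the full EEG training set used in the paper.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(21 * 6000, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
full_train_loader = DataLoader(
    TensorDataset(torch.randn(64, 21, 6000), torch.randint(0, 2, (64, 1)).float()),
    batch_size=16, shuffle=True)

# Phase-II: reload the model saved at the end of phase-I and keep training
# on the entire training set until the loss returns to its phase-I value.
checkpoint = torch.load("phase1_checkpoint.pt")  # hypothetical checkpoint file
model.load_state_dict(checkpoint["model_state"])
optimizer.load_state_dict(checkpoint["optimizer_state"])
target_loss = checkpoint["final_loss"]           # loss when phase-I ended

criterion = torch.nn.BCEWithLogitsLoss()         # normal vs. abnormal labels
epoch_loss = float("inf")
while epoch_loss > target_loss:
    running, batches = 0.0, 0
    for x, y in full_train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        running += loss.item()
        batches += 1
    epoch_loss = running / batches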

Introduction

Neurological disorders are among the major causes of disability and death worldwide and place a significant burden on the global health system. EEG signals are collected by mounting a number of electrodes (e.g., 32, 64, or 128) on the scalp according to standard montages (Chatrian et al, 1985). EEG is used widely in medical practice as an inexpensive tool for diagnosing neurological disorders and observing patterns in various medical conditions, owing to its excellent temporal resolution compared to other brain imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). The low maintenance and hardware costs of EEG make it an appealing tool for providing neurological care to patients in low-income countries. It is generally employed as the standard test for the diagnosis and characterization of epilepsy and for the prognostication of patients in intensive care (Yamada and Meng, 2012; Tatum, 2014). Most hospitals and clinics generate EEG data in digital formats; if these data are curated, labeled, and stored, the resulting repositories can be very useful for training automated EEG analytic tools that can eventually be employed to assist neurologists and physicians in providing better care to patients with neurological disorders.
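
As an illustration of such a standard montage (not code from the paper; the 19-channel selection and sampling rate below are assumptions), a 10-20 electrode layout can be set up with MNE-Python:

import mne

# Illustrative sketch: a 19-channel 10-20 electrode layout of the kind
# referred to above; channel names and sampling rate are assumptions.
ch_names = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2",
            "F7", "F8", "T7", "T8", "P7", "P8", "Fz", "Cz", "Pz"]
info = mne.create_info(ch_names, sfreq=250.0, ch_types="eeg")
info.set_montage(mne.channels.make_standard_montage("standard_1020"))
print(info)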

