Abstract

Epilepsy can be controlled by targeted treatment of the epileptogenic zone (EZ), the region of the brain where seizures originate. Identification of the EZ often requires visual inspection of invasive EEG recordings and thus relies heavily on electrode placement that covers the EZ. Dense brain coverage would be ideal for obtaining accurate boundaries of the EZ but is not possible due to surgical limitations. This gives rise to the "missing electrode problem", where clinicians want to know what neural activity looks like between implanted electrodes. In this paper, we compare two methods for time series estimation of missing stereotactic EEG (SEEG) recordings. Specifically, we represent SEEG data as a sequence of Linear Time-Invariant (LTI) models. We then remove one signal from the data set and apply two different algorithms to simultaneously estimate the LTI models and the "missing" signal: (i) a reduced-order observer combined with least squares estimation and (ii) an Expectation-Maximization (EM) based Kalman filter. The performance of each approach is evaluated in terms of (i) estimation error, (ii) sensitivity to initial conditions, and (iii) algorithm run-time. We found that the EM approach has smaller estimation errors and is less sensitive to initial conditions. However, the reduced-order observer has a run-time that is orders of magnitude faster.
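To illustrate the underlying idea, the sketch below shows how a missing channel can be read off a Kalman filter state estimate when the multichannel recording is modeled as an LTI system. This is not the authors' implementation: the LTI matrix A is assumed known here and the data are synthetic, whereas the paper jointly estimates the model (via a reduced-order observer with least squares, or via EM) from real SEEG; all sizes and noise covariances below are hypothetical.

```python
# Minimal sketch: recover a held-out channel with a Kalman filter
# under an assumed-known LTI model x_{t+1} = A x_t + w_t, y_t = C x_t + v_t,
# where C selects only the observed (implanted) channels.
import numpy as np

rng = np.random.default_rng(0)

n_ch, T = 6, 500                                   # channels, samples (hypothetical)
A = 0.95 * np.linalg.qr(rng.standard_normal((n_ch, n_ch)))[0]  # stable LTI dynamics
Q = 0.01 * np.eye(n_ch)                            # process-noise covariance (assumed)
R = 0.01 * np.eye(n_ch - 1)                        # measurement-noise covariance (assumed)

# Simulate a "true" multichannel recording.
x = np.zeros((T, n_ch))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(n_ch), Q)

missing = 2                                        # channel we pretend is not implanted
C = np.delete(np.eye(n_ch), missing, axis=0)       # observation matrix: observed channels only
y = x @ C.T + rng.multivariate_normal(np.zeros(n_ch - 1), R, size=T)

# Standard Kalman filter over the full state; the missing channel is simply
# the corresponding entry of the state estimate.
x_hat = np.zeros(n_ch)
P = np.eye(n_ch)
est_missing = np.zeros(T)
for t in range(T):
    # Predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # Update with the observed channels
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y[t] - C @ x_hat)
    P = (np.eye(n_ch) - K @ C) @ P
    est_missing[t] = x_hat[missing]

rmse = np.sqrt(np.mean((est_missing - x[:, missing]) ** 2))
print(f"RMSE on the held-out channel: {rmse:.3f}")
```

In the EM variant described in the abstract, the filtering (E-step) would alternate with re-estimation of A, Q, and R (M-step) until convergence, which is what drives its higher run-time relative to the reduced-order observer.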
