Abstract

Recent years have shown a growth in the application of deep learning architectures, such as convolutional neural networks (CNNs), to electrophysiology analysis. However, using neural networks with raw time-series data makes explainability a significant challenge. Multiple explainability approaches have been developed to provide insight into the spectral features learned by CNNs from EEG. However, across electrophysiology modalities, and even within EEG, there are many unique waveforms of clinical relevance, and existing methods that provide insight into the waveforms learned by CNNs are of questionable utility. In this study, we present a novel model visualization-based approach that analyzes the filters in the first convolutional layer of the network. To our knowledge, this is the first method focused on extracting explainable information from EEG waveforms learned by CNNs while also providing insight into the learned spectral features. We demonstrate the viability of our approach within the context of automated sleep stage classification, a well-characterized domain that can help validate our approach. We identify three subgroups of filters with distinct spectral properties, determine the relative importance of each group of filters, and identify several unique waveforms learned by the classifier that were vital to its performance. Our approach represents a significant step forward in explainability for electrophysiology classifiers, and we hope it will also provide useful insights in future studies. Clinical Relevance- Our approach can assist with the development and validation of clinical time-series classifiers.
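The core idea of first-layer filter analysis can be illustrated with a minimal sketch: treat each learned 1-D convolutional kernel as a short waveform, compute its power spectrum, and group filters by the canonical EEG band holding most of their power. Everything below is an illustrative assumption (the function names, the 100 Hz sampling rate, the band edges, and the synthetic "filters"), not the authors' implementation.

```python
import numpy as np

def filter_spectra(filters, fs=100.0):
    """Compute the power spectrum of each first-layer conv filter.

    filters: (n_filters, kernel_len) array of learned 1-D kernels.
    fs: EEG sampling rate in Hz (assumed here; use your data's rate).
    """
    n = filters.shape[1]
    power = np.abs(np.fft.rfft(filters, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, power

def group_by_dominant_band(freqs, power, bands=((0, 4), (4, 13), (13, 50))):
    """Assign each filter to the (hypothetical) band with the most power."""
    labels = []
    for p in power:
        band_power = [p[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
        labels.append(int(np.argmax(band_power)))
    return np.array(labels)

# Toy demonstration: two synthetic "filters", a 2 Hz and a 10 Hz sinusoid,
# standing in for kernels extracted from a trained network.
t = np.arange(50) / 100.0
filters = np.stack([np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 10 * t)])
freqs, power = filter_spectra(filters)
labels = group_by_dominant_band(freqs, power)
print(labels)  # the 2 Hz filter falls in band 0, the 10 Hz filter in band 1
```

In a real analysis, `filters` would come from the trained classifier's first convolutional layer (e.g., its weight tensor flattened per output channel), and the resulting groups could then be compared by ablating each group and measuring the drop in classification performance.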
