Abstract

This work proposes an intrinsically explainable, straightforward method to decode P300 waveforms from electroencephalography (EEG) signals, overcoming the black-box nature of deep learning techniques. The proposed method lets convolutional neural networks decode information from images, an area where they have achieved astonishing performance. By plotting the EEG signal as an image, the waveform can be both visually interpreted by physicians and technicians and detected by the network, offering a straightforward way of explaining the decision. The identified pattern is used to implement a P300-based speller device, which can serve as an alternative communication channel for persons affected by amyotrophic lateral sclerosis (ALS). The method is validated through a brain–computer interface simulation on a public dataset recorded from ALS patients. Letter identification rates from the simulated speller show that the method identifies the P300 signature across the set of 8 patients. The proposed approach achieves performance similar to other state-of-the-art proposals while providing clinically relevant explainability (XAI).
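As a rough illustration of the core idea described above (not the authors' implementation, whose details the abstract does not give), the sketch below rasterizes a 1-D EEG epoch into a binary image and then applies a single 2-D convolution, the basic operation a CNN layer would perform on that image. All function names and parameters here are hypothetical, chosen only to make the plot-then-convolve pipeline concrete.

```python
import numpy as np

def signal_to_image(sig, height=32):
    """Rasterize a 1-D epoch into a binary image: time on x, amplitude on y.

    This mimics 'plotting' the EEG trace so that a CNN (or a clinician)
    can look at the same picture. One pixel is set per time sample.
    """
    sig = np.asarray(sig, dtype=float)
    lo, hi = sig.min(), sig.max()
    if hi == lo:  # flat signal: draw a horizontal line at the bottom
        rows = np.zeros(len(sig), dtype=int)
    else:         # map amplitude into [0, height - 1]
        rows = np.round((sig - lo) / (hi - lo) * (height - 1)).astype(int)
    img = np.zeros((height, len(sig)))
    img[height - 1 - rows, np.arange(len(sig))] = 1.0  # y axis points upward
    return img

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation, as computed by a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy usage: a positive deflection standing in for a P300-like bump.
epoch = np.sin(np.linspace(0, np.pi, 64))
image = signal_to_image(epoch, height=16)
response = conv2d_valid(image, np.ones((3, 3)))
```

In a trained network the kernels would be learned rather than hand-set, but because the input is a literal plot of the signal, the filters and feature maps can be inspected visually, which is the explainability argument the abstract makes.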


