Abstract

The brain's response to visual stimuli of different colors might be used in a brain-computer interface (BCI) paradigm that lets a user control their surroundings by looking at specific colors. Allowing the user to control elements of their environment, such as lighting and doors, by looking at corresponding signs of different colors could serve as an intuitive interface. This paper presents work on the development of an intra-subject classifier for red, green, and blue (RGB) visual evoked potentials (VEPs) in electroencephalogram (EEG) recordings. Three deep neural networks (DNNs), proposed in earlier papers, were employed and tested on data in both source space and electrode space. All tests performed in electrode space yielded better results than those in source space. The best classifier achieved an accuracy of 77% averaged over all subjects, with the best subject reaching 96%.

Clinical relevance: This paper demonstrates that deep learning can be used to classify red, green, and blue visual evoked potentials in EEG recordings with an average accuracy of 77%.
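The intra-subject pipeline summarized above (epoched EEG trials labeled by stimulus color, fed to a classifier, evaluated per subject) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's method: the epochs are synthetic data with an assumed 8-channel x 64-sample shape, and the paper's DNNs are replaced by a simple softmax (multinomial logistic) classifier so the example stays self-contained.

```python
# Hypothetical sketch of intra-subject 3-class (R/G/B) VEP classification.
# Synthetic epochs stand in for real EEG; a softmax classifier stands in
# for the paper's DNNs. Shapes and trial counts are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES, N_CHANNELS, N_SAMPLES = 3, 8, 64   # assumed epoch shape
N_TRIALS = 60                                  # assumed trials per color

# Synthetic "epochs": each color class gets a distinct mean pattern plus noise.
means = rng.normal(0, 1, (N_CLASSES, N_CHANNELS * N_SAMPLES))
X = np.vstack([m + rng.normal(0, 2.0, (N_TRIALS, m.size)) for m in means])
y = np.repeat(np.arange(N_CLASSES), N_TRIALS)

# Shuffle and split; intra-subject means train and test come from one subject.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
split = int(0.8 * len(y))
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

# Minimal softmax-regression classifier trained by batch gradient descent.
W = np.zeros((X.shape[1], N_CLASSES))
onehot = np.eye(N_CLASSES)[ytr]
for _ in range(300):
    logits = Xtr @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.01 * Xtr.T @ (p - onehot) / len(ytr)

acc = (np.argmax(Xte @ W, axis=1) == yte).mean()
print(f"test accuracy: {acc:.2f}")
```

Because each subject gets their own train/test split and their own weights, accuracies can then be averaged across subjects, mirroring the 77% average reported in the abstract.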


