Abstract

Brain–computer interfaces (BCIs) allow users to communicate via brain activity alone. Many BCIs rely on the P300 and other event-related potentials (ERPs) that are elicited when target stimuli flash. Although there has been considerable research exploring ways to improve P300 BCIs, surprisingly little work has focused on new ways of changing the visual stimuli to elicit more recognizable ERPs. In this paper, we introduce a “combined” BCI based on P300 potentials and motion-onset visual evoked potentials (M-VEPs) and compare it with BCIs based on each simple approach alone (P300 and M-VEP). Offline data suggested that performance would be best in the combined paradigm. Online tests with adaptive BCIs confirmed that our combined approach is practical in an online BCI and yielded better performance than the other two approaches (p < 0.05) without annoying or overburdening the subject. The highest mean classification accuracy (96%) and practical bit rate (26.7 bit/min) were obtained in the combined condition.
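
Bit rates like the one reported above are conventionally derived from the Wolpaw information transfer rate, which combines the number of selectable targets, the classification accuracy, and the selection speed. The sketch below illustrates that standard formula; the 36-target layout and the selection rate in the example are illustrative assumptions, not values taken from this paper, so the printed figure will not exactly reproduce the 26.7 bit/min result.

    import math

    def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
        """Wolpaw information transfer rate in bits/min.

        n_classes: number of selectable targets (e.g., 36 for a 6x6 speller)
        accuracy: classification accuracy P, in (0, 1]
        selections_per_min: selections completed per minute
        """
        if accuracy >= 1.0:
            # Perfect accuracy: each selection carries log2(N) bits.
            bits_per_selection = math.log2(n_classes)
        else:
            p = accuracy
            bits_per_selection = (
                math.log2(n_classes)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1))
            )
        return bits_per_selection * selections_per_min

    # Hypothetical example: a 36-target speller at 96% accuracy making
    # ~6 selections per minute (both values assumed for illustration).
    print(f"{wolpaw_itr(36, 0.96, 6):.1f} bits/min")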
