Abstract

Because an augmented-reality-based brain-computer interface (AR-BCI) is easily disturbed by external factors, traditional electroencephalogram (EEG) classification algorithms fail to meet real-time processing requirements when the number of stimulus targets is large or when operating in a real environment. We propose a multi-target fast classification method for the augmented-reality-based steady-state visual evoked potential (AR-SSVEP) using a convolutional neural network (CNN). To explore the availability and accuracy of efficient multi-target classification in AR-SSVEP under short stimulation durations, a similar stimulus layout was used on a computer screen (PC) and on an optical see-through head-mounted display (OST-HMD) device (HoloLens). The experiment included nine flicker stimuli at different frequencies, and a CNN-based multi-target fast classification method was constructed to perform the nine-class task; the average accuracy of the AR-BCI with our CNN model at 0.5-s and 1-s stimulus durations was 67.93% and 80.83%, respectively. These results verify the efficacy of the proposed model for multi-target classification in AR-BCI.
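The abstract does not specify the network architecture, so the following is only an illustrative sketch, not the authors' model: a minimal CNN-style forward pass for nine-class SSVEP classification (temporal convolution, ReLU, global average pooling, linear head, softmax), with an assumed channel count and sampling rate, using NumPy only.

```python
# Hypothetical sketch (NOT the paper's implementation): a tiny CNN-style
# forward pass mapping one EEG epoch to probabilities over 9 SSVEP targets.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 8    # assumed EEG channel count (not given in the abstract)
N_SAMPLES = 125   # 0.5-s epoch at an assumed 250 Hz sampling rate
N_CLASSES = 9     # nine flicker frequencies, as in the paper

def conv1d(x, w):
    """Valid-mode temporal convolution, summed over EEG channels.
    x: (channels, time); w: (filters, channels, kernel)."""
    f, c, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((f, t_out))
    for i in range(f):
        for t in range(t_out):
            out[i, t] = np.sum(w[i] * x[:, t:t + k])
    return out

def forward(epoch, w_conv, w_fc, b_fc):
    h = np.maximum(conv1d(epoch, w_conv), 0.0)  # ReLU temporal features
    pooled = h.mean(axis=1)                     # global average pooling
    logits = pooled @ w_fc + b_fc               # linear classifier head
    e = np.exp(logits - logits.max())
    return e / e.sum()                          # softmax over the 9 classes

# Random, untrained weights: this only exercises the data flow and shapes.
w_conv = rng.standard_normal((16, N_CHANNELS, 11)) * 0.1
w_fc = rng.standard_normal((16, N_CLASSES)) * 0.1
b_fc = np.zeros(N_CLASSES)

epoch = rng.standard_normal((N_CHANNELS, N_SAMPLES))  # one simulated trial
probs = forward(epoch, w_conv, w_fc, b_fc)
print(probs.shape)  # (9,) — one probability per flicker frequency
```

In practice the weights would be trained on labeled SSVEP epochs; the sketch only shows how a short 0.5-s epoch can be reduced to a nine-way decision in a single cheap forward pass, which is what makes CNN classifiers attractive for real-time AR-BCI use.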
