Abstract

The development of alternative pathways for communicating with the outside world, independent of language or limb motion, is important. A key challenge, however, is achieving multi-dimensional, accurate control of a robot using head signals. This paper proposes an end-to-end convolutional neural network (CNN) with residual blocks that takes the filtered raw signal as input and detects four tasks: left eye blink, right eye blink, continuous eye blinks, and teeth gritting. Building on this model, an online brain–computer interface (BCI) system is developed to control the left turn, right turn, forward motion, stop, and speed of the TurtleBot mobile robot platform. A portable EEG measurement device, the TGAM module, collects single-channel EEG signals through a dry electrode. Time domain (TD) and time–frequency analyses characterize the time span and frequency range of each task signal, which lays the foundation for determining the analysis window parameters and for feature extraction. The proposed model uses one-dimensional convolutions to extract local features and two-dimensional convolutions to extract global features, achieving an average detection accuracy of 97.399%, significantly higher than that of state-of-the-art machine learning classifiers fed with fused TD and frequency domain (FD) features (p < 0.01). FD features also outperform TD features in detection performance (p < 0.01). Finally, an online BCI system based on the proposed method is developed to interact with the TurtleBot.
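The abstract describes the model only at a high level, so the following is a minimal sketch of how one-dimensional convolutions (local temporal features) and two-dimensional convolutions (global features) might be combined with a residual block for single-channel EEG task detection. All layer sizes, the 512-sample window length, and the four-class output are illustrative assumptions, not the authors' published configuration; PyTorch is assumed as the framework.

```python
# Minimal sketch of a residual CNN mixing 1D and 2D convolutions for
# single-channel EEG task detection. Layer sizes, window length, and
# class count are assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class ResidualBlock1D(nn.Module):
    """1D convolutional block with an identity skip connection."""

    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual (skip) connection


class EEGTaskNet(nn.Module):
    """1D convs extract local temporal features; the resulting feature
    map is then treated as a 2D image so 2D convs can mix information
    globally across channels and time."""

    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            ResidualBlock1D(16),
            nn.MaxPool1d(4),
        )
        self.global_ = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(8 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, window) filtered raw EEG
        f = self.local(x)               # (batch, 16, window // 4)
        f = f.unsqueeze(1)              # view feature map as a 2D "image"
        g = self.global_(f)             # (batch, 8, 4, 4)
        return self.head(g.flatten(1))  # class logits


if __name__ == "__main__":
    model = EEGTaskNet()
    dummy = torch.randn(2, 1, 512)  # two windows of filtered raw signal
    print(model(dummy).shape)       # torch.Size([2, 4])
```

On the control side, the abstract maps detected tasks to TurtleBot motion commands. A hypothetical mapping via ROS geometry_msgs/Twist messages could look like the sketch below; the task-to-command assignment, topic name, and velocity values are assumptions for illustration only.

```python
# Hypothetical mapping from detected EEG tasks to TurtleBot velocity
# commands via ROS. The task assignments and velocities are illustrative,
# not taken from the paper.
import rospy
from geometry_msgs.msg import Twist

COMMANDS = {
    "left_blink": (0.0, 0.5),        # turn left (angular velocity only)
    "right_blink": (0.0, -0.5),      # turn right
    "continuous_blink": (0.2, 0.0),  # move forward
    "grit_teeth": (0.0, 0.0),        # stop
}


def send_command(pub: rospy.Publisher, task: str) -> None:
    linear, angular = COMMANDS[task]
    msg = Twist()
    msg.linear.x = linear
    msg.angular.z = angular
    pub.publish(msg)


if __name__ == "__main__":
    rospy.init_node("eeg_bci_teleop")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rospy.sleep(1.0)  # allow the publisher to register
    send_command(pub, "continuous_blink")  # e.g. drive forward
```

A real system would also implement the speed adjustment mentioned in the abstract, for example by scaling msg.linear.x according to the detected task sequence, but the paper's exact scheme is not specified here.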
