Abstract

This article explores the use of single-trial EEG signals to predict voluntary movements of one hand and of both hands. For single-hand movements, three kinds of task were considered: grasping, releasing, and holding. For two-hand movements, the tasks were left- and right-hand grasping, left- and right-hand releasing, and holding. The subject performs the tasks spontaneously, without waiting for or responding to any external cues. In addition, a neural adaptive noise canceller is developed to suppress eye-blink artifacts; the adaptive filter is implemented as a three-layer feed-forward neural network. Feature vectors are formed from three channels (Fz, C3, and F3). A multilayer perceptron (MLP) trained with the back-propagation algorithm and a radial basis function (RBF) network trained with a stochastic gradient rule are employed to discriminate the different EEG patterns. In the classical approach to MLP and RBF implementation, the number of hidden units is fixed in advance, which usually results in too many hidden units. To overcome this drawback, an enhanced resource-allocating network (RAN) is developed for discriminating the EEG patterns. The RAN starts with no hidden units and grows by allocating new hidden units according to the novelty of the sequentially arriving EEG signals. The results of this analysis show that the neural networks can detect the movements of one hand and of both hands with average classification accuracies of 98.82% and 96.40%, respectively. Moreover, the RAN requires fewer training epochs than the MLP and RBF networks. This work represents a promising approach to controlling a prosthetic device.
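The abstract's key idea is the RAN's novelty-driven growth: a new Gaussian hidden unit is allocated only when an incoming sample is both far from all existing centres and poorly predicted; otherwise the existing weights are updated incrementally. The sketch below illustrates this Platt-style allocation rule under stated assumptions; the class name `RAN`, its parameters (`err_thresh`, `dist_max`, `dist_min`, `decay`, `lr`, `overlap`), and their values are illustrative and are not taken from the paper, whose "enhanced" RAN may differ in its update details.

```python
import numpy as np


class RAN:
    """Minimal sketch of a resource-allocating network (Platt-style).

    Gaussian hidden units are allocated on-line when a sequentially
    arriving sample is 'novel': far from all existing centres AND
    poorly predicted. All hyperparameter values are illustrative.
    """

    def __init__(self, n_in, n_out, err_thresh=0.1, dist_max=1.0,
                 dist_min=0.1, decay=0.99, lr=0.05, overlap=0.8):
        self.centres = np.empty((0, n_in))   # RBF centres
        self.widths = np.empty(0)            # RBF widths (sigma)
        self.weights = np.empty((0, n_out))  # hidden-to-output weights
        self.bias = np.zeros(n_out)
        self.err_thresh, self.lr, self.overlap = err_thresh, lr, overlap
        self.dist, self.dist_min, self.decay = dist_max, dist_min, decay

    def _phi(self, x):
        # Gaussian activations of all current hidden units.
        if self.widths.size == 0:
            return np.empty(0)
        d2 = np.sum((self.centres - x) ** 2, axis=1)
        return np.exp(-d2 / (self.widths ** 2))

    def predict(self, x):
        phi = self._phi(x)
        return (phi @ self.weights if phi.size else 0.0) + self.bias

    def fit_sample(self, x, y):
        """Process one (input, target) pair arriving sequentially."""
        phi = self._phi(x)
        err = y - self.predict(x)
        nearest = (np.min(np.linalg.norm(self.centres - x, axis=1))
                   if len(self.centres) else np.inf)

        if np.linalg.norm(err) > self.err_thresh and nearest > self.dist:
            # Novel pattern: allocate a new hidden unit centred on x,
            # with width scaled to the nearest existing centre.
            self.centres = np.vstack([self.centres, x])
            width = self.overlap * (nearest if np.isfinite(nearest) else self.dist)
            self.widths = np.append(self.widths, width)
            self.weights = np.vstack([self.weights, err])
        else:
            # Familiar pattern: LMS-style update of output weights and bias.
            if phi.size:
                self.weights += self.lr * np.outer(phi, err)
            self.bias += self.lr * err

        # Shrink the distance threshold so allocation becomes rarer over time.
        self.dist = max(self.dist * self.decay, self.dist_min)
```

In this reading, the network size adapts to the data: early, dissimilar EEG feature vectors create new units, while later samples mostly refine existing weights, which is consistent with the abstract's claim of fewer hidden units and training epochs than a fixed-size MLP or RBF network.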
