Abstract

Recently, deep learning has attracted wide interest in the machine learning field. Deep learning refers to multilayer perceptron artificial neural network algorithms; it has the advantage of approximating complicated functions and alleviating the optimization difficulty associated with deep models. The multilayer extreme learning machine (MLELM) is an artificial neural network learning algorithm that combines the advantages of deep learning and the extreme learning machine: MLELM not only approximates complicated functions but also requires no iteration during training. In this paper, we combine MLELM with the kernel extreme learning machine (KELM) to propose the deep extreme learning machine (DELM) and apply it to EEG classification. The paper focuses on applying DELM to the classification of a visual feedback experiment, using MATLAB and the second brain-computer interface (BCI) competition datasets. Simulation and analysis of the experimental results confirm the effectiveness of DELM for EEG classification.

Highlights

  • Brain-computer interface (BCI) is a kind of technology that enables people to communicate with a computer or to control devices with EEG signals [1]

  • This paper focuses on the application of deep extreme learning machine (DELM) in the classification of the visual feedback experiment, using MATLAB and the second brain-computer interface (BCI) competition datasets

  • On BCI competition II dataset Ia, the average error of DELM is 47.89%, but the minimum error is reduced to 39.44%, which is much lower than the results of BCI competition II


Summary

Introduction

Brain-computer interface (BCI) is a technology that enables people to communicate with a computer or to control devices using EEG signals [1]. MLELM was proposed by Kasun et al. in 2013. Like other deep learning models, MLELM uses unsupervised learning to train the parameters in each layer, but the difference is that MLELM does not need to fine-tune the network. In the MLELM model, V_i represents the output weights of the ELM autoencoder (ELM-AE); the input of the ELM-AE is H_{i−1}, and the number of ELM-AE hidden-layer nodes is identical to the number of nodes in the ith MLELM hidden layer when the parameters between the ith and (i − 1)th MLELM hidden layers are trained by ELM-AE. The algorithm combining MLELM and KELM is called the deep extreme learning machine (DELM), whose kernel matrix satisfies H_{k+1} H_{k+1}^T = Ω_DELM.
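The pipeline described above can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' MATLAB implementation: each ELM-AE layer learns output weights by least squares, the transposes of those weights project the data into the next representation (no fine-tuning), and a kernel ELM with an RBF kernel forms the output layer. The layer sizes, regularization constant C, and kernel width gamma below are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_autoencoder(H, n_hidden):
    """One ELM-AE layer: random hidden mapping, then output weights
    beta solved by least squares so that sigmoid(H W + b) @ beta ~= H."""
    n_features = H.shape[1]
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    G = sigmoid(H @ W + b)               # random hidden activations
    beta = np.linalg.pinv(G) @ H         # (n_hidden, n_features)
    return beta

def mlelm_features(X, layer_sizes):
    """Stack ELM-AE layers: beta.T of each layer projects the data
    into the next representation, with no iterative fine-tuning."""
    H = X
    for n_hidden in layer_sizes:
        beta = elm_autoencoder(H, n_hidden)
        H = sigmoid(H @ beta.T)
    return H

def kelm_train(H, T, C=100.0, gamma=1.0):
    """Kernel ELM output layer: alpha = (I/C + Omega)^-1 T,
    where Omega[i, j] = exp(-gamma * ||h_i - h_j||^2)."""
    sq = np.sum(H ** 2, axis=1)
    Omega = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * H @ H.T))
    return np.linalg.solve(np.eye(len(H)) / C + Omega, T)

def kelm_predict(H_train, alpha, H_new, gamma=1.0):
    d = (np.sum(H_new ** 2, axis=1)[:, None]
         + np.sum(H_train ** 2, axis=1)[None, :]
         - 2 * H_new @ H_train.T)
    return np.exp(-gamma * d) @ alpha

# Toy two-class data (stand-in for EEG feature vectors): clusters at -2 and +2.
X = np.vstack([rng.standard_normal((20, 4)) - 2,
               rng.standard_normal((20, 4)) + 2])
T = np.r_[-np.ones(20), np.ones(20)]

H = mlelm_features(X, [8, 6])            # two stacked ELM-AE layers
alpha = kelm_train(H, T, C=100.0, gamma=1.0)
pred = np.sign(kelm_predict(H, alpha, H, gamma=1.0))
acc = np.mean(pred == T)                 # training accuracy on the toy data
```

Because the hidden weights are random and the output weights have closed-form least-squares solutions, the whole model trains without any gradient iterations, which is the property the paper highlights.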

Experiments and Analysis
Findings
Conclusions
