Abstract

In this work, we show the success of unsupervised transfer learning between the Electroencephalographic (EEG, brainwave) and Electromyographic (EMG, muscular wave) classification domains with both MLP and CNN methods. To achieve this, signals are measured from both the brain and the forearm muscles: EMG data are gathered from a 4-class gesture classification experiment via the Myo Armband, and a 3-class mental state EEG dataset is acquired via the Muse EEG Headband. A hyperheuristic multi-objective evolutionary search method is used to find the best network hyperparameters. The optimised deep neural network topology is then used to classify both the EMG and EEG signals, attaining 84.76% and 62.37% accuracy, respectively. Next, when pre-trained weights from the EMG classification model are used to initialise the EEG classifier rather than random weight initialisation, 93.82% (+29.95) accuracy is reached; when EEG pre-trained weights are used to initialise the EMG classifier, 85.12% (+0.36) accuracy is achieved. When the EMG network attempts to classify EEG, it outperforms the EEG network even without any training (+30.25%, reaching 82.39% at epoch 0), and similarly the EEG network attempting to classify EMG data outperforms the EMG network (+2.38% at epoch 0). All transfer networks achieve higher pre-training (epoch-0) abilities, steeper learning curves, and higher asymptotes, indicating that knowledge transfer is possible between the two signal domains. In a second experiment, the same datasets are projected as 2D images and the same transfer learning process is carried out with CNNs. In this experiment, EMG-to-EEG transfer learning is found to be successful but not vice versa, although EEG-to-EMG transfer learning does exhibit a higher starting classification accuracy. The significance of this work lies in the successful transfer of ability between models trained on two different biological signal domains, reducing the need to build more computationally complex models in future research.
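
The weight-transfer step described in the abstract can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal example assuming a Keras-style workflow, in which the hidden layers of an MLP trained on EMG data are copied into an otherwise identical MLP for EEG classification, with only the softmax layer re-created for the new number of classes. The names `build_mlp`, `emg_model`, `eeg_model`, the 64-feature input and the (512, 512, 512) topology are placeholders, not the evolved topology reported in the paper, and direct transfer assumes both domains use feature vectors of the same length.

    # Minimal sketch of cross-domain weight transfer (assumption: Keras-style MLPs
    # with identical input dimensionality and hidden topology).
    from tensorflow import keras

    def build_mlp(n_features, n_classes, hidden=(512, 512, 512)):
        """Build a simple MLP; the topology here is a placeholder."""
        model = keras.Sequential()
        model.add(keras.layers.Dense(hidden[0], activation="relu",
                                     input_shape=(n_features,)))
        for units in hidden[1:]:
            model.add(keras.layers.Dense(units, activation="relu"))
        model.add(keras.layers.Dense(n_classes, activation="softmax"))
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # MLP trained on the 4-class EMG gesture data (training call omitted).
    emg_model = build_mlp(n_features=64, n_classes=4)
    # emg_model.fit(emg_X, emg_y, ...)

    # EEG classifier: same hidden topology, 3 output classes for mental states.
    eeg_model = build_mlp(n_features=64, n_classes=3)

    # Transfer: copy every hidden layer's weights from the EMG network instead of
    # keeping the random initialisation; the output layer is left untouched
    # because the class counts differ (4 gestures vs. 3 mental states).
    for src, dst in zip(emg_model.layers[:-1], eeg_model.layers[:-1]):
        dst.set_weights(src.get_weights())

    # eeg_model.fit(eeg_X, eeg_y, ...) then fine-tunes the transferred weights.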

Highlights

  • It is no secret that the hardware requirements of Deep Learning are far outgrowing the average consumer level of resource availability, even when a distributed processing device such as a GPU is considered [1]

  • Motivated by the small successes of cross-subject transfer learning within EEG and EMG domains independently, as well as the similar nature and behaviour of these biological signals, we propose to explore the potential of applying learnt knowledge from one biological signal domain to the other and vice versa

  • The model trained on the EMG dataset is used to transfer knowledge to a model trained to classify the EEG dataset, and vice versa. These are compared against their baseline non-transfer-learning counterparts. This is carried out a second time with Convolutional Neural Networks, where the signals have been projected as raster images (a generic sketch of such a projection follows this list)
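
The highlight above mentions projecting the biosignals as raster images for the CNN experiment. The exact projection procedure is not given in this summary, so the following is only an illustrative sketch under an assumed scheme: a fixed-length window of signal values is min-max normalised and reshaped into a square grayscale image. The function name `signal_to_raster` and the 16×16 size are placeholders, not values from the paper.

    # Illustrative sketch only: the reshaping scheme below is an assumption,
    # not the projection procedure used in the paper.
    import numpy as np

    def signal_to_raster(window, side=16):
        """Turn a 1-D signal window of length side*side into a (side, side, 1)
        grayscale raster scaled to [0, 1], suitable as input to a 2-D CNN."""
        window = np.asarray(window, dtype=np.float32)
        assert window.size == side * side, "window length must equal side*side"
        lo, hi = window.min(), window.max()
        scaled = (window - lo) / (hi - lo + 1e-8)   # min-max normalise to [0, 1]
        return scaled.reshape(side, side, 1)        # channels-last image

    # Example: a 256-sample EMG or EEG window becomes a 16x16 single-channel image.
    image = signal_to_raster(np.random.randn(256), side=16)
    print(image.shape)  # (16, 16, 1)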

Introduction

It is no secret that the hardware requirements of Deep Learning are far outgrowing the average consumer level of resource availability, even when a distributed processing device such as a GPU is considered [1]. In addition to this, limited data availability often hampers the machine learning process. For these reasons, researchers often look for similar domains between which learning can be transferred, saving computational resources by exploiting those similarities: cross-domain transfer learning allows knowledge gained in one domain to be applied when interpreting new data in another [3], [4]. A Multilayer Perceptron (MLP) is an Artificial Neural Network (ANN) trained via the backpropagation of errors [17] and a subsequent gradient descent optimisation algorithm [18] in order to perform a classification or regression prediction task [19]. Learning by backpropagation is driven by a gradient descent optimisation algorithm that updates the weights within the neural network. Inspired by RMSProp [23] and Momentum [24], the main steps of ADAM are the following: 1) the exponentially weighted average of past gradients, $v_{dW}$, is calculated.
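
The enumeration above is truncated in this summary. For completeness, the update below follows the standard ADAM formulation (not reproduced from the paper itself), where $dW$ is the current gradient of the loss with respect to a weight matrix $W$, $\beta_1$, $\beta_2$, $\alpha$ and $\epsilon$ are hyperparameters, and $t$ is the update step:

$$v_{dW} = \beta_1 v_{dW} + (1-\beta_1)\,dW \quad \text{(exponentially weighted average of past gradients)}$$
$$s_{dW} = \beta_2 s_{dW} + (1-\beta_2)\,dW^{2} \quad \text{(average of past squared gradients, as in RMSProp)}$$
$$\hat{v}_{dW} = \frac{v_{dW}}{1-\beta_1^{t}}, \qquad \hat{s}_{dW} = \frac{s_{dW}}{1-\beta_2^{t}} \quad \text{(bias correction)}$$
$$W \leftarrow W - \alpha\,\frac{\hat{v}_{dW}}{\sqrt{\hat{s}_{dW}} + \epsilon} \quad \text{(weight update)}$$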
