Abstract

Electroencephalography (EEG) datasets are often small and high-dimensional, owing to cumbersome recording processes. In these conditions, powerful machine learning techniques are essential to handle the large amount of information per sample and overcome the curse of dimensionality. Artificial Neural Networks (ANNs) have achieved promising performance in EEG-based Brain-Computer Interface (BCI) applications, but they involve computationally intensive training algorithms and hyperparameter optimization methods. An awareness of the quality-cost trade-off, although usually overlooked, is therefore highly beneficial. In this paper, we apply a hyperparameter optimization procedure based on Genetic Algorithms to Convolutional Neural Networks (CNNs), Feed-Forward Neural Networks (FFNNs), and Recurrent Neural Networks (RNNs), all of them purposely shallow. We compare their relative quality and energy-time cost, and we also analyze the variability in the structural complexity of networks of the same type with similar accuracies. The experimental results show that the optimization procedure improves accuracy in all models, and that CNN models with only one hidden convolutional layer can equal or slightly outperform a 6-layer Deep Belief Network. FFNNs and RNNs were not able to reach the same quality, although their cost was significantly lower. The results also highlight that, within the same type of network, size is not necessarily correlated with accuracy: smaller models can and do match, or even surpass, larger ones in performance. Overfitting is a likely contributing factor, since deep learning approaches struggle with limited training examples.
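
To make the procedure concrete, the sketch below outlines a generic Genetic Algorithm loop over a hyperparameter space. It is a minimal illustration under assumed settings, not the implementation evaluated in the paper: the search space, the uniform-crossover and mutation operators, and the population parameters are all hypothetical, and the fitness function, which in practice would train a shallow network and return its validation accuracy, is stubbed with a random value.

```python
import random

# Hypothetical search space for one shallow CNN; gene names and value ranges
# are illustrative assumptions, not the paper's actual configuration.
SEARCH_SPACE = {
    "num_filters":   [8, 16, 32, 64],
    "kernel_size":   [3, 5, 7],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout":       [0.0, 0.25, 0.5],
}

def random_individual():
    """Sample one hyperparameter configuration uniformly from the space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(individual):
    """Placeholder for the expensive step: train the shallow network with
    these hyperparameters and return its validation accuracy."""
    return random.random()  # stub; replace with a real train-and-evaluate call

def crossover(parent_a, parent_b):
    """Uniform crossover: each gene is taken from either parent with equal odds."""
    return {k: random.choice((parent_a[k], parent_b[k])) for k in SEARCH_SPACE}

def mutate(individual, rate):
    """Independently re-sample each gene with probability `rate`."""
    return {k: random.choice(SEARCH_SPACE[k]) if random.random() < rate else v
            for k, v in individual.items()}

def evolve(pop_size=20, generations=10, elite=2, mutation_rate=0.1):
    """Truncation selection with elitism; returns the best configuration found."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate each individual once; training is the costly step in practice.
        scores = [fitness(ind) for ind in population]
        ranked = [population[i] for i in
                  sorted(range(pop_size), key=scores.__getitem__, reverse=True)]
        parents = ranked[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)), mutation_rate)
                    for _ in range(pop_size - elite)]
        population = ranked[:elite] + children  # elitism keeps the best intact
    scores = [fitness(ind) for ind in population]
    return population[max(range(pop_size), key=scores.__getitem__)]

if __name__ == "__main__":
    print(evolve())
```

Because every fitness call implies a full training run, the number of evaluations (population size times generations) dominates the energy-time cost of the search, which is the kind of quality-cost trade-off discussed above.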

Highlights

  • Over the last decades, computational power has increased significantly thanks to a wide array of new technologies and computing paradigms

  • Feature Selection (FS) and Genetic Algorithm (GA)-based hyperparameter optimization are applied to three types of shallow neural networks (CNNs, Feed-Forward Neural Networks (FFNNs), and Recurrent Neural Networks (RNNs)), which are then compared in terms of classification accuracy and energy-time cost

  • The difficulty is further amplified when training data is scarce relative to its dimensionality, as is usual in Brain-Computer Interface (BCI) applications

Introduction

Computational power has increased significantly thanks to a wide array of new technologies and computing paradigms. These advances have made many previously intractable problems approachable and opened new lines of research in other fields. Bioinformatics, for instance, attempts to understand biological data through computer science, mathematics, and statistics; applications to gene expression analysis [1,2,3] and brain activity analysis are popular examples.
