Abstract

Structural plasticity (SP) in the brain is a process that allows structural neuronal changes in response to learning. Spiking Neural Networks (SNNs) are an emerging form of artificial neural network that uses brain-inspired techniques to learn. However, the application of SP in SNNs and its impact on overall learning and network behaviour are rarely explored. In the present study, we use an SNN with a single hidden layer to apply SP in classifying electroencephalography (EEG) signals from two publicly available datasets. We treated classification accuracy as the measure of learning capability and applied metaheuristics to derive the optimised number of hidden-layer neurons along with other hyperparameters of the network. The optimised structure was then compared with overgrown and undergrown structures in terms of accuracy, stability, and network properties. Networks with SP yielded ~94% and ~92% accuracies in classifying wrist positions and mental states (stressed vs. relaxed), respectively. The same SNN developed for mental stress classification produced ~77% and ~73% accuracies in classifying arousal and valence states. Moreover, the networks with SP demonstrated superior performance stability during iterative random initiations. Interestingly, these networks had fewer inactive neurons and a preference for lowered neuron firing thresholds. This research highlights the importance of systematically selecting the hidden-layer neurons over arbitrary settings, particularly for SNNs using Spike-Timing-Dependent Plasticity learning, and provides findings that may lead to the development of SP learning algorithms for SNNs.

Highlights

  • Spiking Neural Networks (SNNs), referred to as the third generation of neural networks [1], are capable of accommodating pattern recognition and function approximation with greater computational efficiency [2]

  • We fully connected the input layer to the hidden layer with pseudorandom weight initiations following a Gaussian distribution, and used an unsupervised learning strategy based on the spike-timing-dependent plasticity (STDP) algorithm [13], [14] to update weights

  • We used a version of the Dynamic Evolving Spiking Neural Network [33] algorithm to create and adapt weights from the hidden layer to the output layer
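The STDP-based weight update named in the highlights above can be sketched as a simple pair-based rule. This is a minimal illustration only; the amplitudes and time constants below are assumptions for the sketch, not the paper's actual settings.

```python
import numpy as np

# Pair-based STDP (illustrative parameters, not the paper's values)
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        # pre fires before (or with) post -> potentiate, decaying with dt
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    # post fires before pre -> depress
    return -A_MINUS * np.exp(dt / TAU_MINUS)
```

Closer spike pairs produce larger weight changes, so causally related pre/post firings are reinforced while anti-causal ones are weakened.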


INTRODUCTION

Spiking Neural Networks (SNNs), referred to as the third generation of neural networks [1], are capable of accommodating pattern recognition and function approximation with greater computational efficiency [2]. As discussed in [21], [22], SP can be implemented to increase resource efficiency, which is important in hardware implementations of SNNs. In contrast, [17]–[20] introduce SP as a form of learning method where neurons are added and/or removed according to predefined thresholds to obtain desired spiking patterns. These methods are not intended to explore the impact of η under STDP learning, or its impact on overall ML performance and the intrinsic properties of the network. The same data extraction method with the DEAP dataset was carried out by [29], [30], and [31], which enables performance comparison.
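The threshold-based add/remove strategy attributed to [17]–[20] can be illustrated with a minimal pruning sketch. The activity measure, the threshold value, and the function name are assumptions made for illustration; they are not taken from those works.

```python
import numpy as np

def prune_inactive(firing_rates, weights, min_rate=0.5):
    """Structural-plasticity sketch: drop hidden neurons whose firing
    rate (Hz) falls below a predefined threshold; return the reduced
    input->hidden weight matrix and the keep mask."""
    keep = firing_rates >= min_rate
    return weights[:, keep], keep

rates = np.array([0.0, 2.1, 0.3, 5.0])   # per-hidden-neuron activity
w = np.random.rand(8, 4)                  # 8 inputs x 4 hidden neurons
w_kept, mask = prune_inactive(rates, w)
# only the two sufficiently active neurons survive: shape (8, 2)
```

Growth works symmetrically: when activity saturates above an upper threshold, new neurons (fresh weight columns) are appended instead of removed.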

DATA ENCODING
SPIKE TIME DEPENDENT PLASTICITY
CLASSIFIER LEARNING
NETWORK OPTIMISATION
RESULTS
DISCUSSION